
Yield-Area Optimizations of Digital Circuits Using

Non-dominated Sorting Genetic Algorithm (YOGA)

Vineet Agarwal
Electrical and Computer Engineering
The University of Arizona, Tucson, AZ
vagarwal@ece.arizona.edu

Janet Wang
Electrical and Computer Engineering
The University of Arizona, Tucson, AZ
wml@ece.arizona.edu

ABSTRACT

With shrinking technology, the timing variation of a digital circuit has become the most important factor in designing a functionally reliable circuit. Gate sizing has emerged as one of the most efficient ways to curb the yield deterioration caused by manufacturing variations. In the past, single-objective optimization techniques have been used to optimize the timing variation, whereas multi-objective optimization techniques can provide a more promising approach to circuit design. We propose a new algorithm called YOGA, based on the multi-objective optimization technique known as the Non-dominated Sorting Genetic Algorithm (NSGA). YOGA optimizes a circuit in multiple domains and provides the user with a Pareto-optimal set of solutions distributed over the whole optimal design spectrum, giving users the flexibility to choose the solution that best fits their requirements. YOGA overcomes the disadvantages of traditional optimization techniques, and provides solutions even under very stringent bounds.

1. INTRODUCTION

With the recent shrinking of transistor feature size, uncertainty in circuit response has become the most menacing issue for circuit designers. Data statistics indicate that the delay variation of a simple gate can be as high as 10%-15% of its expected value. If no compensating techniques are employed to counter these variations, the yield of nanometer digital circuits can decrease drastically, increasing the manufacturing cost many times over. In the recent past, the technique of scaling the transistor W/L ratio, also known as gate sizing, has emerged as one of the most convincing techniques for combating the uncertainty due to fickle fabrication processes. In this technique, various algorithms and heuristics are used to scale transistor sizes so as to decrease the timing variations of the circuit and increase the yield. Gate sizing thus presents a promising approach to counter the harmful effects of manufacturing process uncertainty.

A survey of previous literature reveals that gate sizing techniques have been in the limelight for the past decade. The issue of calculating the timing yield and optimizing it has been addressed numerous times. Most generally, the gate sizing problem has been viewed as minimizing the timing variability under constraints on area or delay [1]. Yield optimization has also been carried out using optimization tools [2] under similar constraints, through modular neural networks [3], by geometric programming [4], and in pre-synthesis steps [5]. Yield issues for sequential circuits have also been addressed in [6]. Song in [7] proposed an approach in which gate sizing is done on statistical critical paths, offering a trade-off between area and variation reduction. Most of these works concentrated on the trade-off between area overhead and timing variance reduction. Recently, Orshansky et al. proposed a stochastic algorithm for power minimization under variability with a guarantee of power and timing yield [8]. Power thus now plays a vital role in designing the circuit, while a constraint is kept on timing yield. The traditional framework for optimization can be summarized as:

    min f(x)
    Subject to g(x) ≤ g_max    (1)
               h(x) ≤ h_max

Optimization through the techniques just mentioned, using the single objective function of Equation (1), has various disadvantages. First and foremost, the final solution of a single-objective optimization process depends on the constraints placed on the other parameters (g_max, h_max), which are user defined. If these values are not properly chosen, the resulting solution may not be the most appropriate one. For example, if we define penalty as the increase in g(x) and h(x), and the bounds are set very loose, the optimization may minimize f(x) while using all the penalty it is allowed to incur. A second solution may exist with a similar objective function value but with considerably less penalty. In such cases the traditional techniques may not deliver the second solution and thus may not spot the most 'attractive' solution. Secondly, these optimization techniques all use only a single objective function. If more than one objective has to be optimized, the traditional techniques will optimize the first objective, and then redo the optimization on this solution with the second objective as the primary aim; the optimization has to be done in a 'sequential' sense. Multi-objective optimization poses a better approach in which multiple objectives are simultaneously optimized: both f(x) and the penalties on g(x) and h(x) can be optimized (minimized) at the same time. Thirdly, if the bounds on g(x) and h(x) are set too tight, there may not exist any feasible solution that satisfies all the constraints. In such cases, the traditional techniques will flounder and end up giving no solution. Optimization techniques are therefore required which can minimize multiple objectives, are user independent, and do not crumple under overly stringent constraints.

We therefore propose our technique, Yield-area Optimization using Genetic Algorithm (YOGA), which counters the above-stated disadvantages. YOGA attempts to minimize all criteria, i.e., area, mean delay, and delay variance, simultaneously. At the end of the simulation, YOGA provides the user with multiple solutions spread over the optimum design spectrum. All the solutions in the final set are Pareto-optimal solutions [9, 10]: none of the solutions is better than any other in the set, so any one of them can be chosen without hesitation about its optimality. This gives designers the flexibility to choose the most appropriate solution according to their needs. YOGA's application can easily be extended to simultaneously optimize the power consumption of a circuit: just by including an extra objective of minimizing power stochastically [8], designers can get similar trade-off information between power and timing yield and find the best-fitting solution, or obtain trade-off information for the larger picture including area, power, and timing yield all together.

Algorithm 1 NSGA Algorithm
 1: pop ← GenerateInitialPopulation
 2: while generation ≤ max_generation do
 3:   rank ← NonDominatedRanking(pop)
 4:   fitness ← FitnessAssignment(pop, rank)
 5:   for i = 1 to N step 2 do
 6:     parent1 ← Selection(pop, fitness)
 7:     parent2 ← Selection(pop, fitness)
 8:     (child1, child2) ← Crossover(parent1, parent2)
 9:     newpop_i ← Mutation(child1)
10:     newpop_{i+1} ← Mutation(child2)
11:   end for
12:   pop ← newpop
13:   generation ← generation + 1
14: end while
15: final_rank ← NonDominatedRanking(pop)
16: NonDominatedSolutions ← {pop_i : final_rank_i = 1}
17: return NonDominatedSolutions
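The Pareto-optimality notion used throughout the paper can be illustrated with a short, self-contained sketch. Python is used here purely for illustration; the function names and the (area, max σ/µ) numbers are hypothetical, not taken from YOGA's implementation:

```python
def dominates(fa, fb):
    """For minimization: fa dominates fb if it is no worse in every
    objective and strictly better in at least one."""
    no_worse = all(a <= b for a, b in zip(fa, fb))
    better = any(a < b for a, b in zip(fa, fb))
    return no_worse and better

def pareto_set(points):
    """Keep only the points that no other point dominates."""
    return [p for i, p in enumerate(points)
            if not any(dominates(q, p) for j, q in enumerate(points) if j != i)]

# (area, max sigma/mu) pairs; illustrative values only
designs = [(25.1, 0.077), (32.5, 0.043), (40.0, 0.050), (71.6, 0.019)]
```

Here (40.0, 0.050) is dominated by (32.5, 0.043), which is better in both objectives, so it drops out of the front; the three remaining designs are mutually incomparable trade-offs, which is exactly the kind of set YOGA returns.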
The rest of the paper is organized as follows. Section 2 provides a preliminary introduction to the technique of gate sizing. In Section 3, we describe the backbone algorithm, the Non-dominated Sorting Genetic Algorithm. Section 4 provides a demonstration of the technique, while the YOGA implementation for gate sizing is discussed in Section 5. The experimental results are listed in Section 6, and Section 7 concludes the paper.

2. BASIC GATE SIZING TECHNIQUE

The traditional framework of the gate sizing technique can be summarized as:

    find s
    min σ²(d_O)
    Subject to µ(d_O) ≤ µ_max    (2)
               π(G) ≤ π_max

where

1. µ(x) is the mean and σ²(x) is the variance of the random variable x.
2. s = {s_1, s_2, ..., s_n}, where s_i is the size of gate g_i, G is the set of all gates, and g_i ∈ G. The size s of any gate denotes the ratio of the area of the gate to that of a minimum-sized inverter.
3. O is the set of all the outputs of the circuit.
4. d_i is the delay of the ith output of the circuit, i ∈ O.
5. π(G) denotes the area of the circuit containing the gates G.
6. π_max and µ_max are the maximum allowable area and mean output delay of the circuit.

Thus a regular gate sizing algorithm will solve the non-linear optimization problem stated in Equation 2 and provide the end user with a single s. The constraints place an upper bound on the penalty entailed while optimizing the objective function, where the penalty can be in the form of area or delay overhead.

3. NON-DOMINATED SORTING GENETIC ALGORITHM

In this section we provide an overview of the Non-dominated Sorting Genetic Algorithm (NSGA), which will be used as the base for our gate sizing algorithm. Extensive study of multi-objective programming using genetic algorithms has been done in the past [9, 10, 11, 12]. One of the most influential works is by Deb [13], who coined the primary prototype of NSGA.

3.1 Pareto-optimal Solution

A solution is called Pareto-optimal if there exists no other solution for which at least one criterion has a better value while the values of the remaining criteria are the same or better. In other words, one cannot improve any criterion without deteriorating the value of at least one other criterion. Thus a design vector x* is Pareto-optimal if and only if, for a minimization problem and for any x and i,

    f_j(x) ≤ f_j(x*) ∀ j = 1, ..., m, j ≠ i  ⇒  f_i(x) ≥ f_i(x*)

where f_i(x) is the value of the ith objective function. NSGA gives rise to a set of such Pareto-optimal solutions. In a multi-objective optimization problem, the designer may be interested in more than one such solution because of the equal quality of the solutions. The concept of NSGA is discussed in the next section.

3.2 NSGA Algorithm

The algorithmic flow of NSGA is summarized in Algorithm 1. A model multi-objective optimization problem can be summarized as:

    min f_i(x)  ∀ i ∈ {1, ..., M}
    Subject to: x_j^L ≤ x_j ≤ x_j^U  ∀ j ∈ {1, ..., P}    (3)

where f_i is the ith objective function for i = 1, ..., M, and x = {x_1, ..., x_P}, where x_j is the jth design variable and x_j ∈ [x_j^L, x_j^U]. Henceforth all optimization will pertain to a minimization problem, unless otherwise stated. The various steps of Algorithm 1 are explained below.

3.2.1 GenerateInitialPopulation Operator

This process generates pop, an array of size N × P, which contains N (population size) members, each of which is a vector of length P. This population is created by generating uniformly distributed random numbers bounded by [x_j^L, x_j^U] for each x_j.

3.2.2 NonDominatedRanking Operator

Definition 1. Let the operator '≻' denote worse and '≺' denote better, i.e., for a minimization problem a ≻ b means a > b, while for a maximization problem a ≻ b means a < b, and vice versa for the '≺' operator. A solution x_a is said to dominate x_b if both of the following conditions are true:
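The dominance test of Definition 1 and the repeated front-peeling of the NonDominatedRanking operator (Algorithm 2) can be sketched as follows. This is a simplified illustration, not the authors' code; rank 1 corresponds to the Pareto front of the current population:

```python
def dominates(fa, fb):
    """Minimization-sense dominance: no worse everywhere, better somewhere."""
    return (all(a <= b for a, b in zip(fa, fb))
            and any(a < b for a, b in zip(fa, fb)))

def non_dominated_ranking(objs):
    """Peel successive fronts: mark the current non-dominated set with the
    rank counter chi, remove it, increment chi, repeat until empty."""
    rank = {}
    remaining = list(range(len(objs)))
    chi = 1
    while remaining:
        dominated = {i for i in remaining
                     if any(dominates(objs[j], objs[i])
                            for j in remaining if j != i)}
        for i in remaining:
            if i not in dominated:
                rank[i] = chi
        remaining = sorted(dominated)
        chi += 1
    return [rank[i] for i in range(len(objs))]
```

The loop always terminates: a finite set under strict dominance always contains at least one non-dominated member, so every pass removes something.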
1. x_a is no worse than x_b in any of the objective functions; in other words, f_j(x_a) ⊁ f_j(x_b) for all j = 1, ..., M.

2. The solution x_a is strictly better than x_b in at least one objective; in other words, ∃ j for which f_j(x_a) ≺ f_j(x_b).

Now, for a set of N solution vectors, the non-dominated ranking can be done according to Algorithm 2.

Algorithm 2 Non-Dominated Ranking Algorithm
χ ← 1  {comment: pop is the current population and {x_a, x_b} ∈ pop}
repeat
  N ← length(pop)
  for a = 1 to N do
    if ∃ b such that x_b dominates x_a then
      push(x_a, dominated)
    end if
  end for
  for a = 1 to N do
    if x_a ∉ dominated then
      rank_{x_a} ← χ
      remove(x_a, pop)
    end if
  end for
  χ ← χ + 1
  clear dominated
until pop = {∅}
return NonDominatedRanking

3.2.3 FitnessAssignment Operator

After the population of size N has been ranked, fitness values are assigned to each of the solutions. At first, all the solutions in a particular front (having the same rank) are assigned the same fitness value (f_k), and then these fitness values are shared among the solutions of that front. The sharing procedure for a solution x_a in the kth front is performed as follows:

i. Compute the Euclidean distance measure to another solution x_b in the kth front as:

    d_ab = sqrt( Σ_{p=1}^{P} ( (x_p^a − x_p^b) / (x_p^U − x_p^L) )² )

ii. The sharing function value is then computed as [12]:

    S(d_ab) = 1 − (d_ab / ξ)²   if d_ab ≤ ξ
    S(d_ab) = 0                 otherwise

where ξ ≈ 0.5 / 10^(1/P).

iii. The niche count is then computed as:

    n_a = Σ_{b=1}^{p_k} S(d_ab)

where p_k is the number of solutions in the kth front.

iv. The new fitness value is then computed as:

    f_a = f_k / n_a

This procedure is repeated for all the fronts with f_{k+1} = min(f_k) − κ, where κ is a small positive value.

3.2.4 Selection Operator

After fitness has been assigned to all the solutions of the current population, one solution is randomly picked from the pool. The probability of selecting a solution x_a with fitness value f_a is given by [12]:

    P(x_a) = f_a / Σ_{j=1}^{N} f_j

3.2.5 Crossover Operator

Crossover is the fundamental mechanism of genetic rearrangement for both real organisms and genetic algorithms. After two parents are selected by the Selection operator, they are crossed over with a probability of 0.9 to give rise to two new children. The Crossover operation can be summarized as:

i. Calculate β_q as:

    β_q = (uα)^{1/(η_c+1)}            if u ≤ 1/α
    β_q = (1/(2 − uα))^{1/(η_c+1)}    otherwise

where η_c = 30, u is a random number between 0 and 1, and α = 2 − β^{−(η_c+1)}, with β calculated as:

    β = 1 + (2 / (x_i^b − x_i^a)) · min[(x_i^a − x_i^L), (x_i^U − x_i^b)]

ii. The children are then calculated as:

    y_i^a = 0.5[(x_i^a + x_i^b) − β_q |x_i^b − x_i^a|]
    y_i^b = 0.5[(x_i^a + x_i^b) + β_q |x_i^b − x_i^a|]

3.2.6 Mutation Operator

Mutations modify a small fraction of the solution variables: roughly one in every 10,000. Mutation alone does not generally advance the search for a solution, but it provides insurance against the development of a uniform population incapable of further evolution. After the two children are generated by the Crossover operator, they are mutated using the polynomial probability distribution [13]. The following steps are applied to each y_i for i ∈ {1, ..., P} with a probability p_m:

i. The parameter δ_q is calculated as:

    δ_q = [2u + (1 − 2u)(1 − δ)^{η_m+1}]^{1/(η_m+1)} − 1            if u ≤ 0.5
    δ_q = 1 − [2(1 − u) + 2(u − 0.5)(1 − δ)^{η_m+1}]^{1/(η_m+1)}    otherwise

where u is a random number between 0 and 1, δ = min[(y_i^a − y_i^L), (y_i^U − y_i^a)], and η_m = 100 + iteration number.

ii. The mutated child portion is calculated as:

    z_i^a = y_i^a + δ_q (y_i^U − y_i^L)

The mutation probability p_m is varied linearly from 1/P to 1.0.

For minimization problems in NSGA, an objective of the form f_j(x) ≤ t_j is modified to min ⟨f_j(x) − t_j⟩, where the operator ⟨x⟩ returns x if x is positive and 0 otherwise. A much more detailed explanation of these operators can be found in [10].
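The per-variable SBX crossover (Section 3.2.5) and polynomial mutation (Section 3.2.6) formulas can be sketched as below. This is an illustrative sketch, not the authors' implementation: the bound-distance term δ in the mutation is normalized by the variable range here (a common form of polynomial mutation), and results are clamped to the bounds; neither choice is stated explicitly in the text.

```python
import random

ETA_C = 30  # crossover distribution index, as given in Section 3.2.5

def sbx_pair(xa, xb, lo, hi, u=None):
    """Simulated binary crossover of one variable from two parents."""
    if u is None:
        u = random.random()
    if abs(xb - xa) < 1e-12:
        return xa, xb
    x1, x2 = min(xa, xb), max(xa, xb)
    beta = 1.0 + (2.0 / (x2 - x1)) * min(x1 - lo, hi - x2)
    alpha = 2.0 - beta ** -(ETA_C + 1)
    if u <= 1.0 / alpha:
        beta_q = (u * alpha) ** (1.0 / (ETA_C + 1))
    else:
        beta_q = (1.0 / (2.0 - u * alpha)) ** (1.0 / (ETA_C + 1))
    y1 = 0.5 * ((x1 + x2) - beta_q * (x2 - x1))
    y2 = 0.5 * ((x1 + x2) + beta_q * (x2 - x1))
    return max(lo, min(hi, y1)), max(lo, min(hi, y2))

def polynomial_mutation(y, lo, hi, eta_m, u=None):
    """Polynomial mutation of one variable within [lo, hi]."""
    if u is None:
        u = random.random()
    delta = min(y - lo, hi - y) / (hi - lo)  # normalized distance to nearer bound
    if u <= 0.5:
        delta_q = (2*u + (1 - 2*u) * (1 - delta) ** (eta_m + 1)) \
                  ** (1.0 / (eta_m + 1)) - 1.0
    else:
        delta_q = 1.0 - (2*(1 - u) + 2*(u - 0.5) * (1 - delta) ** (eta_m + 1)) \
                        ** (1.0 / (eta_m + 1))
    return max(lo, min(hi, y + delta_q * (hi - lo)))
```

Two sanity properties of the formulas are visible here: the two children are symmetric about the parents' midpoint (y1 + y2 = x1 + x2 before clamping), and u = 0.5 leaves a mutated variable essentially unchanged (δ_q = 0).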
4. NSGA DEMONSTRATION

We shall now demonstrate the application of NSGA on a sample non-linear multi-objective optimization problem. The sample problem is formulated as:

    goal f1 = 10 x1 ≤ 2                       ≡ min ⟨f1 − 2⟩
    goal f2 = (10 + (x2 − 5)²) / (10 x1) ≤ 2  ≡ min ⟨f2 − 2⟩
    Subject to F ≡ (0.1 ≤ x1 ≤ 1, 0 ≤ x2 ≤ 10)    (4)

Figure 1 shows the values of f1 and f2 for randomly chosen values of x1 and x2. It can be noticed that there exists no feasible solution to this problem, since no values of (x1, x2) can make both objective functions meet their goals. In this situation the traditional optimization techniques will cease to give any fruitful results; the solutions provided by NSGA, on the other hand, are plotted in Figure 2. NSGA provides solutions which are concentrated in a certain neighborhood, tries to bring both objective functions near their goals, and also tries to keep the penalty of the solutions to a minimum. Thus, instead of providing no feasible solution, it provides the solutions from the decision space which are closest to the required (infeasible) objective function point. The advantages of NSGA over the traditional techniques are evident from this example. YOGA can thus use NSGA to do circuit optimization in a similar way, as explained in further detail in the next section.

[Figure 1: Non-overlapping criterion space and decision space of the problem stated in Equation 4]

[Figure 2: NSGA solution to the problem of Equation 4, showing the 1st-generation and 50th-generation solutions in the criterion and decision spaces]

[Figure 3: Depth-based nodal delay model for STA]

5. YOGA IMPLEMENTATION

NSGA in [13] concentrated on the convergence of such a Pareto-optimal set, while YOGA maintains wide diversity in the solution set, so as to provide the designer with ample different solutions distributed over the design spectrum. For using NSGA in YOGA, the problem is formulated as in Equation 5:

    find s
    Minimize max(σ_O / µ_O)
    Minimize π(G)    (5)
    Subject to s_j^L ≤ s_j ≤ s_j^U,  ∀ j ∈ [1, N], g_j ∈ G

where O is the set of all the outputs of the circuit, G is the set of all the N gates in the circuit, µ_O is a vector of the mean delay values of all the outputs in O, σ_O is a vector of the delay deviations of all the outputs in O, and g_i is the ith gate with a sizing factor of s_i.

Thus the solution vector of YOGA will be a set of vectors, each of length N, where each element of each solution corresponds to the sizing factor of the respective gate.

5.1 Gate Delay Model

In the past, for the purpose of Statistical Timing Analysis (STA) of digital circuits, the delay was modelled as a random variable to incorporate the manufacturing uncertainty. Song in [7] modelled the variation in delay as the sum of a factor directly proportional to delay and another random factor, whereas Raj in [2] assumed the variation to be 15% of the expected value. In our work we model the delay as:

    d_i = max(A_i1, ..., A_in) + d_gate_i + d_load_i    (6)

where d_i is the delay at wire i, (A_i1, ..., A_in) are the arrival times of the fan-ins of node i, d_gate_i is the intrinsic delay of gate_i, and d_load_i is the delay incurred due to the fan-out loads of node i. The situation is depicted in Figure 3. Here the scaling factors (s_1, ..., s_N) are modelled as random variables. Intuitively, the smaller the sizing factor, the larger the uncertainty in manufacturing. Thus the factor (σ_s/µ_s) decreases with increasing size (where σ_s and µ_s are the deviation and mean of the sizing factor, respectively). With this kind of delay model and delay characterization (Section 5.2.1), the delay at node i can be computed fairly accurately by considering the effects of arrival time, intrinsic delay, and the loading effect.

5.2 Statistical Timing Analysis (STA)

Block-based and event propagation techniques [14, 15] are efficient ways of doing STA for a circuit. Our methodology is based on the techniques mentioned in [15]. At first, arrival times are assigned to all the primary inputs of the circuit depending on the fan-out load of each input. Then the whole circuit is divided into different depths.
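The depth (levelization) step just described, where gates fed only by primary inputs get depth 1 and every other gate gets one more than the maximum depth of its fan-ins, can be sketched as follows. The fan-in-map circuit representation and the function name are assumptions for illustration:

```python
def assign_depths(fanins):
    """fanins maps each gate name to the list of signals driving it.
    Primary inputs are any names that never appear as keys; they are
    treated as depth 0, so a gate fed only by primary inputs gets depth 1,
    and otherwise depth = max(depth of fan-in gates) + 1."""
    depth = {}

    def gate_depth(g):
        if g not in fanins:          # primary input
            return 0
        if g not in depth:           # memoize: each gate resolved once
            depth[g] = max(gate_depth(f) for f in fanins[g]) + 1
        return depth[g]

    for g in fanins:
        gate_depth(g)
    return depth

# A small c17-style netlist (hypothetical wiring, for illustration only):
# a, b, c, d are primary inputs.
netlist = {"g1": ["a", "b"], "g2": ["b", "c"],
           "g3": ["g1", "g2"], "g4": ["g2", "d"]}
```

The memoized recursion gives each gate its depth exactly once, matching the rule that a gate is not assigned a depth until all of its fan-ins have one.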
The gates which receive their inputs from the primary inputs of the circuit are assigned depth 1. If F_i is the set of fan-ins of gate g_i, and D_i are the depths of the members of F_i, then the depth of gate g_i is given by max(D_i) + 1. A gate is not assigned its depth until all the elements in F_i have been assigned their corresponding depths. After the depth assignment is completed, the mean and variance of the delay are computed for all the gates having depth 1. Then the delay values are propagated from depth 1 to depth 2, and so on from depth k to depth k+1. The delay at the output of any gate g_i is given by Equation 6. The max(A_i1, ..., A_in) operation is performed using the formulation for z = max(x, y) [16] according to Equation 7:

    a² = σ_x² + σ_y² − 2 σ_x σ_y ρ
    α = (µ_x − µ_y) / a
    ν1 = µ_x Φ(α) + µ_y Φ(−α) + a φ(α)
    ν2 = (µ_x² + σ_x²) Φ(α) + (µ_y² + σ_y²) Φ(−α) + (µ_x + µ_y) a φ(α)    (7)

where x ∼ N(µ_x, σ_x), y ∼ N(µ_y, σ_y), ρ = r(x, y) with r() the correlation operator, and Φ and φ denote the standard normal CDF and PDF. Thus we get µ_z = ν1 and σ_z = sqrt(ν2 − ν1²).

The delay values are thus propagated from depth 1 till the last front, which consists of the primary outputs of the circuit. After the mean and variance values are computed at the primary outputs, the factor max(σ1/µ1, ..., σL/µL), where L is the number of outputs of the circuit, is computed and returned to YOGA as the value of one of its objective functions. Though this is not a very accurate technique for doing STA, since our main concern in this work is the effectiveness of YOGA, we can accept a less complicated STA model.

The area of the whole circuit is simply calculated as:

    Area(G) = Σ_{j=1}^{N} α_j s_j

where N is the number of gates in the circuit and s_j is the size of gate j, which has a weighting factor of α_j. The value of α_j for any particular gate depends on the number of its inputs and the type of the gate. For example, a 2-input XOR gate has an α value of 24 units, while a 3-input NAND gate has an α value of 15 units. The weighting factor α depicts the size of a minimum-sized gate of that type.

5.2.1 Delay Characterization

For a shorter run time while doing the STA of any circuit, delay characterization is done beforehand. In this technique, the effect on the delay at node i due to variations in s_i of gate g_i and in the sizes of its fan-outs can be captured simultaneously. Initially, a sample circuit is constructed consisting of a chain of 2 inverters of sizes s_D and s_R, respectively. The sample circuit is then simulated using SPICE for a set of random value pairs of s_D and s_R, and the delay of the 1st inverter is computed for each value pair. From Elmore delay analysis we conclude that the delay is of the form:

    d = κ1 + κ2 (s_R/s_D) + κ3 (s_R/s_D)².    (8)

Any curve fitting technique, like the Response Surface Method (RSM) or Singular Value Decomposition (SVD), can be used to estimate the three coefficients of Equation 8. This delay model can then be used for the STA. If 50 different value pairs are used for characterization, the error in the estimated delay is around 0.0001% of the actual value, which is fairly acceptable.

6. EXPERIMENTAL RESULTS

[Figure 4: Results of YOGA on the ISCAS c17 benchmark, plotting max(σ/µ) versus area for the 1st-generation and 50th-generation solutions; the design spectrum runs from high σ/low π on the left to high π/low σ on the right]

This section presents the results of applying the YOGA algorithm to a variety of benchmark circuits of the ISCAS family [17] and to a few circuits whose gate types were chosen arbitrarily. The STA was done using Berkeley PTM [18] 70nm model cards for the NMOS and PMOS transistors. The results of YOGA on the c17 circuit are shown in Figure 4. It can be inferred from the figure that, given a random set of gate sizes, YOGA delivers a variety of solutions at the designer's disposal. All the solutions marked with '+' in Figure 4 constitute the Pareto-optimal set, in which no one solution is better than any other. A few of the Pareto-optimal solutions for other benchmark circuits are listed in Table 1.

If traditional single-objective techniques were used to solve this optimization problem, say to minimize max(σ/µ) while keeping the total area under 120 units, the solution provided would be a single set of gate sizes lying on the right extreme section of the design spectrum of Figure 4. But it can be noticed from the figure that other solutions exist which minimize the objective function by the same amount yet account for less circuit area. Single-objective techniques fail to see these solutions and thus may not provide solutions which are optimal in the multi-objective sense. YOGA, on the other hand, gives a wide range of solutions, and designers can pick any of the Pareto-optimal solutions according to their needs. For example, low-area solutions can be selected from the left section of the design spectrum, while least-variation solutions can be picked from the right section. The Pareto-optimal solutions give the users visible trade-offs for choosing their solutions. If they do not wish to lose much on either end, area or delay variability, they can choose solutions from the middle sections of the spectrum. This flexibility is not provided by traditional techniques.

For the case of the c17 benchmark, the designer has varied solutions from which to pick according to the requirements. For example, if area is of more concern, solutions such as (area = 25.1485, max(σ/µ) = 0.0769) or (area = 32.4708, max(σ/µ) = 0.0425) can be chosen, where a low-area solution is obtained at the cost of a higher-variance circuit. If, on the other hand, timing variance is of more importance, a solution can be chosen from the other end of the spectrum, such as (area = 71.5729, max(σ/µ) = 0.0193) or (area = 96.1451, max(σ/µ) = 0.0163), where area is compromised to obtain a low-variance circuit. If both objectives need to be kept in control, a solution midway in the design spectrum can be chosen, such as (area = 54.8, max(σ/µ) = 0.0275).
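Returning briefly to the STA step used to produce these numbers: Clark's moment formulas of Equation 7 can be checked numerically with a short sketch (Φ is built from math.erf; the function name is illustrative):

```python
from math import erf, exp, pi, sqrt

def pdf(x):
    """Standard normal PDF."""
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def clark_max(mu_x, sig_x, mu_y, sig_y, rho=0.0):
    """First two moments of z = max(x, y) for jointly Gaussian x, y
    (Equation 7); assumes a > 0, i.e. x and y are not perfectly
    correlated with equal variances."""
    a = sqrt(sig_x**2 + sig_y**2 - 2.0 * sig_x * sig_y * rho)
    alpha = (mu_x - mu_y) / a
    nu1 = mu_x * cdf(alpha) + mu_y * cdf(-alpha) + a * pdf(alpha)
    nu2 = ((mu_x**2 + sig_x**2) * cdf(alpha)
           + (mu_y**2 + sig_y**2) * cdf(-alpha)
           + (mu_x + mu_y) * a * pdf(alpha))
    return nu1, sqrt(nu2 - nu1**2)   # (mu_z, sigma_z)
```

A known closed-form check: for two independent standard normals, E[max(x, y)] = 1/sqrt(π) ≈ 0.564 and the standard deviation is sqrt(1 − 1/π) ≈ 0.826, which the sketch reproduces.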
Similarly, solutions from different portions of the design spectrum can be provided for any given circuit to suit the designer's needs and requirements. The only inputs required are the circuit topology and the minimum and maximum allowable gate sizes.

Table 1: A few solutions of the Pareto-optimal set obtained by YOGA for the ISCAS benchmark circuits

Circuit   No. of   Area = Σ_{j=1}^{N} α_j s_j   max(σ/µ)   Run
Name      gates                                            time (s)
c17       6        25.1485                      0.0769     4.42
                   32.4708                      0.0425
                   54.7947                      0.0275
                   71.5729                      0.0193
                   96.1451                      0.0163
c432      160      769.5831                     0.0538     43.43
                   1054.2467                    0.0317
                   2131.5073                    0.0169
                   3028.1891                    0.0142
                   3798.7005                    0.0121
c499      202      1418.9510                    0.0231     51.37
                   2025.2311                    0.0189
                   3869.6068                    0.0145
                   4932.4860                    0.0135
                   6420.8214                    0.0126
c880      383      1818.4447                    0.0618     94.25
                   3102.6154                    0.0432
                   4756.7919                    0.0281
                   6281.9218                    0.0228
                   7554.7382                    0.0205
c1355     546      2681.1763                    0.0117     137.18
                   3817.4057                    0.0085
                   3965.4355                    0.0068
                   8815.9168                    0.0057
                   10672.4103                   0.0042
c1908     880      3218.8936                    0.0247     227.20
                   5476.0578                    0.0202
                   8981.3529                    0.0163
                   12692.8132                   0.0148
                   15834.0099                   0.0131
c270      1193     6973.1389                    0.0687     341.68
                   9281.2028                    0.0534
                   16315.1987                   0.0323
                   22180.6038                   0.0241
                   24872.2772                   0.0213
c3540     1669     7281.1988                    0.0357     470.75
                   14672.0153                   0.0248
                   21915.7468                   0.0162
                   29180.4415                   0.0151
                   34971.9318                   0.0138
circuit1  20       172.4660                     0.0631     7.67
                   275.4186                     0.0421
                   438.8642                     0.0284
                   569.5252                     0.0231
                   661.2028                     0.0208
circuit2  30       252.6721                     0.0648     10.20
                   628.8381                     0.0408
                   912.0196                     0.0256
                   1181.6013                    0.0237
                   1263.3795                    0.0228

7. CONCLUSION

In this paper, we have shown the advantages of multi-objective optimization techniques over the traditional single-objective techniques. A new algorithm, YOGA, was proposed to apply multi-objective optimization to the gate-sizing domain of digital circuits, and the flexibility obtained with it was demonstrated on various benchmark circuits.

8. REFERENCES

[1] S. H. Choi, B. C. Paul, and K. Roy, "Novel sizing algorithm for yield improvement under process variation in nanometer technology," in DAC, 2004.
[2] S. Raj, S. Vrudhula, and J. Wang, "A methodology to improve timing yield in the presence of process variations," in DAC, 2004, pp. 448-453.
[3] A. A. Ilumoka, "Optimal transistor sizing for CMOS VLSI circuits using modular artificial neural networks," in Twenty-Ninth Southeastern Symposium on System Theory, 1997.
[4] J. Singh, V. Nookala, Z.-Q. Luo, and S. Sapatnekar, "Robust gate sizing by geometric programming," in DAC, 2005.
[5] A. Nardi and A. L. Sangiovanni-Vincentelli, "Synthesis for manufacturability: a sanity check," in Design Automation and Test in Europe, 2004.
[6] M. Pan, C. C. N. Chu, and H. Zhou, "Timing yield estimation using statistical static timing analysis," in Intl. Symposium on Circuits and Systems, 2005.
[7] O. Neiroukh and X. Song, "Improving the process-variation tolerance of digital circuits using gate sizing and statistical techniques," in Design Automation and Test in Europe, 2005.
[8] M. Mani, A. Devgan, and M. Orshansky, "An efficient algorithm for statistical minimization of total power under timing yield constraints," in DAC, 2005.
[9] C. M. Fonseca and P. J. Fleming, "Genetic algorithms for multi-objective optimization: Formulation, discussion and generalization," in 5th Intl. Conf. on Genetic Algorithms, 1993, pp. 416-423.
[10] N. Srinivas and K. Deb, "Multi-objective function optimization using non-dominated sorting genetic algorithms," in Evolutionary Computation, 1994.
[11] A. G. Cunha, P. Oliveira, and J. A. Covas, "Use of genetic algorithms in multicriteria optimization to solve industrial problems," in Proceedings of the Seventh International Conference on Genetic Algorithms, 1997, pp. 682-688.
[12] D. E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning. MA: Addison-Wesley, 1989.
[13] K. Deb, "Non-linear goal programming using multi-objective genetic algorithms," Evolutionary Computation Journal, 1994.
[14] A. Devgan and C. Kashyap, "Block-based static timing analysis with uncertainty," in ICCAD, 2003.
[15] J. Liou, K. Cheng, S. Kundu, and A. Krstic, "Fast statistical timing analysis by probabilistic event propagation," in DAC, 2001, pp. 661-666.
[16] C. Clark, "The greatest of a finite set of random variables," Operations Research, vol. 9, no. 2, 1961.
[17] M. Hansen, H. Yalcin, and J. P. Hayes, "Unveiling the ISCAS-85 benchmarks: A case study in reverse engineering," IEEE Design and Test, 1999.
[18] "Berkeley PTM." [Online]. Available: http://www-device.eecs.berkeley.edu/~ptm/