
IEEE - 31661

A Faster Genetic Algorithm to Solve Knapsack Problem Employing Fuzzy Technique

Shalini Mahato, Dr. Sanjay Biswas
Department of Computer Science and Engineering
Dr. B.C. Roy Engineering College, Durgapur 713206, West Bengal, India
email: swarup.shalini@gmail.com, sanjoy.biswas@bcrec.org

Abstract—The Knapsack problem is an optimization problem classified as NP-hard. Genetic algorithms (GAs) are used extensively in optimization problems as an alternative to traditional heuristics. This study proposes techniques to achieve faster convergence and better-quality solutions using a genetic algorithm combined with fuzzy logic. First, every item is given a linguistic term so that each item can be monitored separately. Then each chromosome is classified as a whole. A new selection technique is proposed to maintain the diversity of the population. Crossover is performed based on the classification of chromosomes, and a justification of when to use single-point and when to use multipoint crossover is given. A new repair operator is proposed to make infeasible solutions feasible. The proposed algorithm does not get stuck in local maxima; rather, it explores the whole search space, and the new selection and crossover techniques ensure that. The proposed technique is compared with other algorithms for solving the Knapsack problem published in the literature. The results indicate that the proposed algorithm is effective in finding solutions of better quality and provides faster convergence.

Keywords—0-1 knapsack; genetic algorithm; fuzzy; hamming distance; repair operator

I. INTRODUCTION

The Knapsack problem is a combinatorial optimization problem where one has to maximize the benefit of objects in a knapsack without exceeding its capacity. It is an NP-complete problem and, as such, an exact solution for a large input is practically impossible to obtain. We try to solve this problem by a genetic algorithm combined with fuzzy logic. Fuzzy logic helps the genetic algorithm find an optimal solution faster and of better quality.

A. Mathematical Representation

The 0-1 Knapsack problem is the problem of choosing a subset of the N items such that the corresponding profit sum is maximized without the weight sum exceeding the capacity C:

Maximize    Σ_{i=1}^{N} p_i x_i

Subject to  Σ_{i=1}^{N} w_i x_i ≤ C        (1)

where x_i ∈ {0, 1}, i = 1, …, N.

Here w_i and p_i are the weight and profit of the i-th item, and x_i is a binary variable whose value is 1 if item i is included in the knapsack and 0 if it is not.

The Knapsack problem arises in many practical situations, such as cargo loading, capital budgeting, inventory control, routing, and project scheduling.

A genetic algorithm [1] is an adaptive heuristic search algorithm based on natural selection and genetics, developed in the 1960s by John Holland.

II. LITERATURE REVIEW

The 0-1 Knapsack problem is NP-hard. Various methods have evolved to solve it, such as Bellman's dynamic programming in the 1950s and Kolesar's experiments in the 1960s with the first branch-and-bound algorithm. But when the number of items is very large, these methods are unable to solve the problem.

In this paper, we solve the Knapsack problem with a genetic algorithm combined with fuzzy logic. In a genetic algorithm, the size of the population plays an important role in finding a good solution. If the population is too small, the solution will be poor because diversity in the population is low; if the population is too large, the computation time needed to find a good solution will be very high. According to F. G. Lobo and D. E. Goldberg [2], population size depends directly on the problem's difficulty, but the difficulty is hard to estimate, so one needs to keep experimenting with different population sizes.

Genetic algorithms suffer from premature convergence. Nicoara [4] applied genetic operators based on average progress and partial population re-initialization to preserve genetic diversity, which avoids premature convergence. If premature convergence tends to approach the critical level, it is better to regenerate the population by inserting some randomly generated new chromosomes and replacing a few older ones.
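For small N, the formulation in (1) can be checked directly by enumerating all 2^N selections. This brute-force sketch is for illustration only (it is infeasible for large N, which is exactly why heuristics such as GAs are used):

```python
from itertools import product

def knapsack_bruteforce(weights, profits, capacity):
    """Enumerate all 2^N selections and return (best_profit, best_x),
    maximizing sum(p_i * x_i) subject to sum(w_i * x_i) <= C."""
    n = len(weights)
    best_profit, best_x = 0, [0] * n
    for x in product((0, 1), repeat=n):
        weight = sum(w * xi for w, xi in zip(weights, x))
        if weight <= capacity:  # feasibility constraint (1)
            profit = sum(p * xi for p, xi in zip(profits, x))
            if profit > best_profit:
                best_profit, best_x = profit, list(x)
    return best_profit, best_x
```

On data set 1 used later in the paper (N = 10, C = 269) this enumeration confirms the reported optimum of 295.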

4th ICCCNT - 2013, July 4 - 6, 2013, Tiruchengode, India
Srinivas and Patnaik [5] adaptively varied the crossover and mutation probabilities depending on the difference between the average and best fitness in the population, in order to maintain diversity. Mauldin [6] maintained a minimum Hamming distance between any offspring and all existing chromosomes in the population to maintain diversity. Jalali and Lee [7] introduced a new sexual selection technique: they divided the population into males and females; females were selected using tournament selection, and males were selected based on Hamming distance, active genes, or fitness value, avoiding premature convergence. Jalali and Lee [8] used a fuzzy logic controller in the selection of the crossover operator and its probability.

Syswerda [9] mentioned that the greater the number of crossover points, the more beneficial the crossover. De Jong and Spears [10] compared different kinds of crossover and mutation operators in their study. Jassadapakorn and Chongstitvatana [11] used Hamming distance to maintain population diversity. Chu and Beasley [12] used a repair operator based on a greedy algorithm to obtain feasible solutions. According to Chou [13], a better solution is obtained if a crossover probability of 1.0 is used along with uniform crossover.

III. PROPOSED METHOD

A faster genetic algorithm employing a fuzzy technique is used to solve the 0-1 Knapsack problem.

Algorithm of Faster Genetic Algorithm:

Procedure FasterGA {
    t = 0
    Give a linguistic label to each item, i.e., High, Low, or Medium
    Randomly initialize Population(t)
    Evaluate Population(t)
    While (not termination condition) {
        t = t + 1
        Classify each individual as Good, Average, or Bad;
        Select parents from Population(t) using Diverse Selection;
        for each pair of parents {
            if both parents belong to the same category
                then multipoint crossover with probability Pc;
            if the parents belong to different categories
                then single-point crossover with probability Pc;
            Mutation with probability Pm;
            Use Repair operator for infeasible solutions;
            Insert offspring into Population(t);
            Evaluate Population(t);
        }
    }
}

A. Individual Item Monitoring

Each item has its own weight and profit. To decide whether an item should be included in the knapsack, we can judge the goodness of each item on the basis of profit (maximum profit), weight (minimum weight), or profit per unit weight (maximum profit per unit weight), as in the greedy algorithm.

While all three approaches generate feasible solutions, we cannot guarantee which of them will always generate the optimal solution. All three methods have certain drawbacks. It may happen that the profit of an item is high but its weight is also very high, so we cannot consider it a good item. Similarly, if the weight of an item is low but its profit is very low, we also cannot consider the item good. Generally, profit per unit weight (p_i / w_i) gives a good solution.

If we have two items with the same (p_i / w_i) ratio but different contributions to profit, then the item with the larger profit contribution is more valuable to us. So we need to choose an item on the basis of its (p_i / w_i) ratio along with its profit contribution. Thus, we propose a new technique to value an item on the basis of T_item_i, where

T_item_i = T_profit_i × T_profit_i/weight_i        (2)

T_profit_i = profit_i / max_{i ∈ (1,…,N)} (profit_i)        (3)

T_profit_i/weight_i = (profit_i / weight_i) / max_{i ∈ (1,…,N)} (profit_i / weight_i)        (4)

where N is the total number of items and i denotes the i-th item.

Here, T_profit_i/weight_i is multiplied by T_profit_i so that we can magnify the value of items with a good contribution to profit and suppress items with a small contribution.

After that, a linguistic term is given to each item according to the value of T_item_i, as shown in Fig. 1. Each linguistic term has a triangular or trapezoidal fuzzy set associated with it. Three linguistic terms, Low, Medium, and High, are used, and a membership function quantifies each term. Each item is assigned the value Low, Medium, or High based on the fuzzy set in which it has maximum membership. We can then monitor each item in the chromosome based on the linguistic label assigned to it.
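Equations (2)-(4) and the labeling step can be sketched as follows. The triangular membership functions below are an illustrative assumption; the paper's actual fuzzy sets are only given graphically in Fig. 1:

```python
def t_item(profits, weights):
    """Compute T_item (Eqs. 2-4): normalized profit times normalized profit/weight."""
    max_p = max(profits)
    max_pw = max(p / w for p, w in zip(profits, weights))
    t_profit = [p / max_p for p in profits]                       # Eq. (3)
    t_pw = [(p / w) / max_pw for p, w in zip(profits, weights)]   # Eq. (4)
    return [tp * tw for tp, tw in zip(t_profit, t_pw)]            # Eq. (2)

def linguistic_label(t):
    """Assign Low/Medium/High by maximum membership in three triangular
    fuzzy sets over [0, 1]; the set shapes are assumed, not taken from Fig. 1."""
    mu_low = max(0.0, 1.0 - t / 0.5)             # 1 at t=0, falls to 0 at t=0.5
    mu_med = max(0.0, 1.0 - abs(t - 0.5) / 0.5)  # peak at t=0.5
    mu_high = max(0.0, (t - 0.5) / 0.5)          # 0 at t=0.5, rises to 1 at t=1
    return max((mu_low, "Low"), (mu_med, "Medium"), (mu_high, "High"))[1]
```

For example, an item with both the highest profit and the highest profit/weight ratio gets T_item = 1 and the label High, while an item that scores well on only one of the two factors is pushed towards Medium or Low.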


Fig. 1. Set of linguistic terms associated with T_item_i

B. Problem Representation

An N-bit binary string is used to represent a solution, where N is the total number of items.

C. Initial Population

The initial population is generated randomly. The size of the population is fixed in each generation.

D. Classification of Chromosome

A new technique is proposed to classify a chromosome as Good, Bad, or Average based on the contribution towards profit of each fuzzy set.

Algorithm for Classification of Chromosome:

Let
Sum_profit_Low = sum of the profits of all Low items present in the chromosome;
Sum_profit_Med = sum of the profits of all Medium items present in the chromosome;
Sum_profit_High = sum of the profits of all High items present in the chromosome;
Category[i] = array of characters 'G', 'B', 'A' representing the categories Good, Bad, and Average for each solution i in the population;
Population = total number of chromosomes in the population.

1. for i = 1 to Population
2.     if (Sum_profit_Low > Sum_profit_Med) and (Sum_profit_Low > Sum_profit_High) then
3.         Category[i] = 'B';
4.     elseif (Sum_profit_High > Sum_profit_Med) and (Sum_profit_High > Sum_profit_Low) then
5.         Category[i] = 'G';
6.     else
7.         Category[i] = 'A';
8. end for

E. Diverse Selection

Diversity in the population is very important for convergence to the global optimal solution. In order to maintain diversity in the population, a new selection technique, Diverse Selection, is proposed. Diverse Selection is based on the genotype of the chromosome, using the Hamming distance between two chromosomes.

1) Hamming Distance (HD): The Hamming distance is defined as the number of bits at which two chromosomes differ. Let Ci = {ci1, ci2, …, ciN} and Cj = {cj1, cj2, …, cjN} be two chromosomes, where N is the length of the chromosome. The Hamming distance between Ci and Cj is defined as [8]

HD(Ci, Cj) = Σ_{k=1}^{N} diff(cik, cjk)        (5)

where diff(cik, cjk) = 0 if cik = cjk, and diff(cik, cjk) = 1 if cik ≠ cjk.

The normalized distance Dn is similar to the technique proposed by Jalali and Lee [7]:

Dn = HD(Ci, Cj) / N        (6)

where 0 ≤ Dn ≤ 1 and N is the total length of the string.

2) Algorithm for Diverse Selection

Step 1: Sequentially search for the first unselected chromosome in the population.
Step 2: Compute Dn between it and the other chromosomes in the population sequentially until a chromosome with Dn ≥ 0.5 is found.
Step 3: If no chromosome with Dn ≥ 0.5 is found, select the chromosome with the maximum value of Dn.
Step 4: If the chromosome selected in Step 1 is Bad and the chromosome selected using Step 2 or Step 3 is also Bad, reject that chromosome and continue the search for a suitable chromosome using Steps 2 and 3.
Step 5: If, using Step 4, no Good or Average chromosome is found, select the Bad chromosome with maximum Dn.

Step 4 ensures that crossover between two Bad chromosomes is minimized, as such a crossover has a low probability of yielding an optimal solution. Diverse Selection helps maintain the diversity of the population, which in turn helps in finding solutions faster and better.

3) Algorithm for Random Selection

In Random Selection, we randomly select both parents from the population and perform crossover between them.
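The Hamming-distance-based selection of Section E (Eqs. (5)-(6) and Steps 1-5) can be sketched as follows; the `category` list of 'G'/'A'/'B' labels is assumed to come from the classification step of Section D, and chromosomes are 0/1 strings:

```python
def hamming(ci, cj):
    """Eq. (5): number of bit positions at which two chromosomes differ."""
    return sum(1 for a, b in zip(ci, cj) if a != b)

def diverse_select_mate(pop, category, first_idx, threshold=0.5):
    """Pick a mate for pop[first_idx] following Diverse Selection (Steps 2-5)."""
    n = len(pop[first_idx])
    # Dn (Eq. 6) against every other chromosome in the population
    dn = {j: hamming(pop[first_idx], pop[j]) / n
          for j in range(len(pop)) if j != first_idx}
    # Step 2: first chromosome with Dn >= 0.5; Step 4: skip Bad-Bad pairings
    for j in sorted(dn):
        if dn[j] >= threshold and not (category[first_idx] == 'B' and category[j] == 'B'):
            return j
    # Steps 3/5: fall back to maximum Dn, preferring non-Bad mates for a Bad parent
    non_bad = [j for j in dn if category[j] != 'B']
    candidates = non_bad if (category[first_idx] == 'B' and non_bad) else list(dn)
    return max(candidates, key=lambda j: dn[j])
```

Note the fallback: if every candidate fails the Dn ≥ 0.5 test (or every candidate is Bad while the parent is Bad), the mate with the largest Dn is returned, mirroring Steps 3 and 5.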


We use Random Selection to randomize the population when we get stuck in some local maximum.

4) Algorithm for Selection of either Diverse Selection or Random Selection

Let:
gen = generation number
Fittest[gen] = profit of the fittest chromosome in generation gen

if (gen < 3)
    Diverse_selection
else
{
    if (Fittest[gen-1] > Fittest[gen-2] and Fittest[gen-2] > Fittest[gen-3])
        Diverse_selection
    else
        Random_selection
}

The above algorithm ensures that whenever the algorithm gets stuck in a local maximum due to the use of Diverse Selection (i.e., the best fitness has stopped improving), Random Selection helps it come out of that situation.

F. Crossover

We always perform multipoint crossover between chromosomes of the same category, i.e., Good-Good, Average-Average, and Bad-Bad. We perform single-point crossover between chromosomes of two different categories, i.e., Good-Average, Good-Bad, and Bad-Average.

1) Why multipoint crossover between same-category chromosomes and single-point crossover between different-category chromosomes

Proof: Let N be the length of the string.

Fig. 2. One-point crossover between Good-Bad chromosomes

Suppose we do a 1-point crossover. For less than a quarter of the Good string to be corrupted, the crossover point must lie in the lower N/4 of the string or in the higher N/4, i.e., before point 'a' or after point 'b'. The probability of this is

P(point before 'a') + P(point after 'b') ≤ 1/4 + 1/4 = 1/2

So, if we do a single-point crossover between a Good string and a Bad string, the probability that less than a quarter of the Good string is corrupted is at most 1/2, and hence the probability that more than a quarter of the Good string is corrupted is at least 1 − 1/2 = 1/2.

Now suppose we do a 2-point crossover between a Good and a Bad chromosome.

Fig. 3. Two-point crossover between Good-Bad chromosomes

In this case, for at most a quarter of the Good string to be corrupted, the first crossover point must lie within the lower N/8 of the string, i.e., before point 'a', and the second must lie within the last N/8, i.e., after point 'b'. The probability that the first crossover point falls within the lower N/8 is at most 1/8, and, given that, the probability that the second crossover point falls within the higher N/8 is at most (1/8) / (1 − 1/8) = (1/8) / (7/8) = 1/7. The part of the Good chromosome corrupted by the Bad chromosome is then at most N/8 + N/8 = N/4. So the probability that less than a quarter of the Good string is corrupted is

P(first point before 'a') × P(second point after 'b') ≤ (1/8) × (1/7) = 1/56

and the probability that more than a quarter of the Good string is corrupted is at least 1 − 1/56 ≈ 0.982.

Thus the probability of corruption of the Good chromosome by the Bad chromosome is higher in multipoint crossover than in single-point crossover.
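The argument above can be checked empirically with a small Monte Carlo sketch. It simulates only the two geometric events the proof uses (cut points landing in the outer quarters, or in the two outer eighths, of the string), under an assumed uniform model for cut placement:

```python
import random

def crossover_point_probs(n=1000, trials=100_000, seed=42):
    """Estimate the two events in the argument above:
    1-point: the single cut lands in the outer quarters of the string;
    2-point: the lower cut lands in the lowest N/8 AND the upper cut
    in the highest N/8 (so at most N/8 + N/8 of the string is swapped)."""
    rng = random.Random(seed)
    one_point = sum(
        1 for _ in range(trials)
        if (k := rng.randint(1, n - 1)) < n / 4 or k > 3 * n / 4
    )
    two_point = 0
    for _ in range(trials):
        k1, k2 = sorted(rng.sample(range(1, n), 2))  # two distinct cut points
        if k1 < n / 8 and k2 > 7 * n / 8:
            two_point += 1
    return one_point / trials, two_point / trials
```

Under this uniform-pair model the 1-point event has probability near 0.5 and the 2-point event near 0.03 (the paper's sequential-choice model gives 1/56 ≈ 0.018 instead); either way, the complementary event of corrupting more than N/4 of the Good string is far more likely under 2-point crossover, which is the qualitative claim.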


So we can say that single-point crossover gives a relaxed bound for a crossover between Good and Bad chromosomes. Similar arguments hold for crossover between Good and Average chromosomes and between Average and Bad chromosomes. So we always use single-point crossover between chromosomes of different categories.

According to Syswerda [9] and Chou [13], a more diverse population is generated by uniform crossover. So we perform uniform crossover between chromosomes of the same category, i.e., Good-Good, Average-Average, and Bad-Bad.

G. Mutation

According to Arild Hoff [3], the best choice for mutation is flip mutation with a probability of 1/N, where N is the total number of items, so the same is used here.

H. Repair Operator

After crossover and mutation, we find that some of the generated offspring may overflow the knapsack capacity. To make such an offspring feasible, we discard some of the items from the chromosome based on the value of T_item_i (DROP phase). Then we keep adding items to the chromosome based on the value of T_item_i (ADD phase) as long as it remains feasible.

Chu and Beasley [12] used a genetic algorithm along with a repair operator to solve the multidimensional Knapsack problem; to convert infeasible solutions to feasible ones, they used a repair operator based on a simple greedy algorithm. Our repair operator is similar to that of Chu and Beasley, but we use T_item_i. Here, all items are arranged in ascending order of T_item_i in order to perform the repair operation.

Algorithm for Repair Operator:

Let:
C = knapsack capacity
Total_wt = total weight of the selected items
N = total number of items
chromosome[i] = i-th solution for the knapsack
W_c1 = weight of the c1-th item

// DROP phase
1. while Total_wt > C do
2.     for c1 = 1 to N do
3.         if (Total_wt > C) and (chromosome[i][c1] equals 1) then
4.             Set chromosome[i][c1] to 0
5.             Set Total_wt to Total_wt − W_c1
6.         End if
7.     End for
8. End while

// ADD phase
9.  for c1 = N to 1 do
10.     if (chromosome[i][c1] equals 0) and (Total_wt + W_c1 ≤ C) then
11.         Set chromosome[i][c1] to 1
12.         Set Total_wt to Total_wt + W_c1
13.     End if
14. End for

The DROP phase turns an infeasible solution into a feasible one, and the ADD phase improves the fitness of the chromosome. Thus the entire chromosome becomes feasible and improved with the help of the above algorithm.

IV. EXPERIMENTS AND ANALYSIS

In our algorithm, the Faster Genetic Algorithm, we have used a crossover probability Pc = 1 and a mutation probability Pm = 1/N, where N is the total number of items. The number of generations is 100.

We compare our algorithm with the Standard Genetic Algorithm (SGA) [1][14], the Greedy Algorithm (GA) [14], the Greedy Genetic Algorithm (GGA) [14], the Adaptive Genetic Algorithm (AGA) [5][15], and the Modified Hybrid Adaptive Genetic Algorithm (MHAGA) [15]. The data sets given below have been used extensively to solve the 0-1 Knapsack problem.

Here,
C = knapsack capacity
W = weights of the N items
V = values of the N items

Data set 1 [14]:
N = 10, C = 269
W = {95, 4, 60, 32, 23, 72, 80, 62, 65, 46}
V = {55, 10, 47, 5, 4, 50, 8, 61, 85, 87}

Data set 2 [14][15][16]:
N = 20, C = 878
W = {44, 46, 90, 72, 91, 40, 75, 35, 8, 54, 78, 40, 77, 15, 61, 17, 75, 29, 75, 63}
V = {92, 4, 43, 83, 84, 68, 92, 82, 6, 44, 32, 18, 56, 83, 25, 96, 70, 48, 14, 58}
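The two-phase repair operator of Section H can be sketched as below, assuming the items (and hence the weight list) are already sorted in ascending order of T_item_i, so that index 0 holds the worst item:

```python
def repair(chromosome, weights, capacity):
    """Repair operator: DROP worst items until feasible, then ADD best items
    that still fit. chromosome and weights are aligned lists, sorted in
    ascending order of T_item."""
    chrom = list(chromosome)
    total_wt = sum(w for w, bit in zip(weights, chrom) if bit)
    # DROP phase: remove selected items, worst first, until within capacity
    for c1 in range(len(chrom)):
        if total_wt <= capacity:
            break
        if chrom[c1] == 1:
            chrom[c1] = 0
            total_wt -= weights[c1]
    # ADD phase: scan from the best item down, adding anything that still fits
    for c1 in reversed(range(len(chrom))):
        if chrom[c1] == 0 and total_wt + weights[c1] <= capacity:
            chrom[c1] = 1
            total_wt += weights[c1]
    return chrom
```

The early `break` in the DROP loop stops discarding as soon as the solution is feasible, so only the minimum prefix of worst items is removed before the ADD pass tries to refill the remaining capacity.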


1) Comparative study between the Faster Genetic Algorithm (FzGA) and the Standard Genetic Algorithm (SGA) [14][1] for data sets 1 and 2.

TABLE I: RESULTS FOR SGA AND FzGA

Algorithm   Best result for data set 1   Effective generation
SGA         295                          37
FzGA        295                          1

For data set 1: SGA and FzGA both produce the optimum result of 295, but FzGA has a faster convergence speed: SGA attained the result in the 37th generation, whereas FzGA attained it in the 1st generation itself.

TABLE II: RESULTS FOR SGA AND FzGA

Algorithm   Best result for data set 2   Effective generation
SGA         1037                         100
FzGA        1042                         25

For data set 2: FzGA attained a better solution, and faster convergence, than SGA. SGA attained 1037 in the 100th generation, whereas FzGA attained 1042 as the optimal solution in the 25th generation.

Thus, from the above comparison, we conclude that FzGA is superior to SGA.

2) Comparative study of the Faster Genetic Algorithm (FzGA) with the Adaptive Genetic Algorithm (AGA) [5][15] and the Modified Hybrid Adaptive Genetic Algorithm (MHAGA) [15] for data set 2.

In AGA, to maintain diversity in the population, the crossover and mutation probabilities are changed based on the difference between the average and best fitness in the population [5].

In MHAGA, AGA is further improved with the use of diversity-guided mutation and a greedy transform algorithm to repair infeasible solutions [15].

TABLE III: RESULTS FOR AGA, MHAGA AND FzGA

Algorithm   Best result for data set 2
AGA         1027
MHAGA       1037
FzGA        1042

For data set 2, AGA gave 1027 and MHAGA gave 1037 as the best result, whereas FzGA gave a better result of 1042. Thus we conclude that FzGA gives better results than AGA and MHAGA.

3) Comparative study of the Faster Genetic Algorithm (FzGA) with the Greedy Genetic Algorithm (GGA) [14] for data set 1.

In GGA, a crossover-mutation operator and an improved selection operator are used, and a greedy algorithm is used for infeasible solutions [14].

TABLE IV: RESULTS FOR GGA AND FzGA

Algorithm   Best result for data set 1   Effective generation
GGA         295                          5
FzGA        295                          1

Both gave the optimal result of 295, but FzGA showed faster convergence: FzGA gave the result in an average of 1 generation, whereas GGA gave the same result in an average of 5 generations. Thus we conclude that FzGA gives better results than GGA.

Analysis for data set 1:
Chromosome 0111000111 gives the optimal result of 295.
Knapsack capacity: 269
Best fitness achieved in generation: 1

Analysis for data set 2:
Chromosome 10111111010111111101 gives the optimal result of 1042.
Knapsack capacity: 878
Best fitness achieved in generation: 25

Fig. 4. Fitness vs. generation for data set 1 for FzGA

Fig. 5. Fitness vs. generation for data set 2 for FzGA
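The two optimal chromosomes reported above can be verified directly against the data sets:

```python
def decode(chromosome, weights, values):
    """Total weight and profit of the items selected by a 0/1 chromosome string."""
    bits = [int(b) for b in chromosome]
    weight = sum(w * x for w, x in zip(weights, bits))
    profit = sum(v * x for v, x in zip(values, bits))
    return weight, profit

# Data set 1: N = 10, C = 269
W1 = [95, 4, 60, 32, 23, 72, 80, 62, 65, 46]
V1 = [55, 10, 47, 5, 4, 50, 8, 61, 85, 87]
print(decode("0111000111", W1, V1))  # (269, 295): exactly at capacity, profit 295

# Data set 2: N = 20, C = 878
W2 = [44, 46, 90, 72, 91, 40, 75, 35, 8, 54, 78, 40, 77, 15, 61, 17, 75, 29, 75, 63]
V2 = [92, 4, 43, 83, 84, 68, 92, 82, 6, 44, 32, 18, 56, 83, 25, 96, 70, 48, 14, 58]
print(decode("10111111010111111101", W2, V2))  # (878, 1042)
```

Both reported solutions use the knapsack capacity exactly (269 of 269 and 878 of 878), which is consistent with the ≤ constraint in Eq. (1).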


V. CONCLUSION

From the above experiments and results, we conclude that the Faster Genetic Algorithm provides faster convergence and better-quality results. Diverse Selection helps maintain the diversity of the population and obtain solutions faster and better. The proposed crossover technique helps in finding better solutions. The repair operator makes infeasible solutions feasible and helps provide better solutions. Thus, the proposed algorithm is effective in finding better solutions.

REFERENCES

[1] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning. Pearson, 1989.
[2] F. G. Lobo and D. E. Goldberg, "The parameter-less genetic algorithm in practice," Information Sciences - Informatics and Computer Science, vol. 167, no. 1-4, pp. 217-232, 2004.
[3] A. Hoff, A. Løkketangen, and I. Mittet, "Genetic algorithms for 0/1 multidimensional knapsack problems," in Proceedings Norsk Informatikk Konferanse, NIK '96, 1996.
[4] E. S. Nicoara, "Mechanisms to avoid the premature convergence of genetic algorithms," Matematică - Informatică - Fizică, vol. LXI, no. 1, pp. 87-96, 2009.
[5] M. Srinivas and L. M. Patnaik, "Adaptive probabilities of crossover and mutation in genetic algorithms," IEEE Transactions on Systems, Man and Cybernetics, vol. 24, no. 4, April 1994.
[6] M. L. Mauldin, "Maintaining diversity in genetic search," in Proc. of the National Conference on Artificial Intelligence, Austin, TX, 1984, pp. 247-250.
[7] M. Jalali Varnamkhasti and L. S. Lee, "A genetic algorithm based on sexual selection for the multidimensional 0/1 knapsack problems," International Journal of Modern Physics, 10 pages.
[8] M. Jalali Varnamkhasti and L. S. Lee, "A genetic algorithm with fuzzy crossover operator and probability," Advances in Operations Research, Hindawi Publishing Corporation, 2012.
[9] G. Syswerda, "Uniform crossover in genetic algorithms," in Proceedings of the Third International Conference on Genetic Algorithms, J. D. Schaffer, Ed., San Mateo, CA: Morgan Kaufmann, 1989, pp. 2-8.
[10] K. A. De Jong and W. M. Spears, "A formal analysis of the role of multi-point crossover in genetic algorithms," Annals of Mathematics and Artificial Intelligence, vol. 5, no. 1, pp. 1-26, 1992.
[11] C. Jassadapakorn and P. Chongstitvatana, "Diversity control to improve convergence rate in genetic algorithms," in Proceedings of the 4th International Conference on Intelligent Data Engineering and Automated Learning (IDEAL 2003), vol. 2690 of Lecture Notes in Computer Science, pp. 421-425, 2003.
[12] P. C. Chu and J. E. Beasley, "A genetic algorithm for the multidimensional knapsack problem," Journal of Heuristics, vol. 4, no. 1, pp. 63-86, 1998.
[13] H. Chou, G. Premkumar, and C. H. Chu, "Genetic algorithms for communications network design - an empirical study of the factors that influence performance," IEEE Transactions on Evolutionary Computation, 2001.
[14] Yuxiang Shao and Hongwen Xu, "Solve zero-one knapsack problem by greedy genetic algorithm," in International Workshop on Intelligent Systems and Applications (ISA 2009), IEEE, 2009.
[15] Yanqin Ma, "The modified hybrid adaptive genetic algorithm for 0-1 knapsack problem," in Proceedings of the 24th Chinese Control and Decision Conference (CCDC), IEEE, 2012.

