
Routing Optimization

Dominik R. Rabiej

Mattawan, Michigan

November 29, 2000


Summary

This study creates and analyzes Greedy Random, the first successful problem-independent algorithm for optimizing vehicle routing, the scheduling of multiple deliveries to various clients. Existing vehicle routing optimization techniques are problem-specific. After programming ten algorithms, an initial experiment revealed Greedy Random as the best performing algorithm. Further experiments analyzed Greedy Random's success. Greedy Random surpasses current techniques in ease of applicability and in scope of use in other optimization problems such as planning and layout.


Abstract

This study creates and analyzes a novel technique for optimizing Capacitated Vehicle Routing with Time Windows (CVRTW). In this new approach, ten algorithms each independently drove a generic CVRTW engine. After programming the ten different algorithms, an initial experiment compared them against each other on a set of standardized benchmarks. The best algorithm, Greedy Random (GR), performed significantly better than the other nine at p = 0.025. Four more experiments elucidated the reasons for GR's success. Each experiment compared programmed variants of GR on standardized benchmarks. Analysis of the experimental results found that GR derived its success from its use of a single catalytic initial random move to modify the solution, reoptimize and approach the optimal solution. Greedy Random is the first successful problem-independent optimization algorithm. Because of this, advances in vehicle routing can now be easily applied to other areas of optimization.

1 Introduction

The goal of vehicle routing is to schedule multiple deliveries to various clients. Vehicle

routing has existed since the advent of the Industrial Age, when large-scale production and

supply became possible. As the complexity and scale of the manufacturing world increased,

the task of optimizing vehicle routing grew.

This study examined Capacitated Vehicle Routing with Time Windows (CVRTW), which routes vehicles that each carry a specific capacity of product to different customers with varying availabilities (time windows) and varying demanded amounts of product. By taking into account capacity and time windows, CVRTW generates solutions that have real-life applications [4].

CVRTW is an NP-hard problem, a member of "a complexity class of problems that are intrinsically harder than those that can be solved by a non-deterministic Turing machine in polynomial time" [2]. CVRTW is especially difficult since the number of possible solutions grows exponentially with the size of the problem.

Solomon's 1987 paper initiated research into CVRTW by establishing a standard set

of optimization benchmarks [11]. Using those benchmarks, numerous techniques have optimized CVRTW, including genetic algorithms [12], tabu search [6], probabilistic searches [10],

constraint programming [5], exact algorithms [3], metaheuristics [7], multiple improvement

heuristics [9] and Human-Guided Simple Search [1].

In Human-Guided Simple Search (HuGSS), the most recent of these techniques, a human

user and a computer work together on optimizing a solution. HuGSS allows the human user

to have a broad overview of the solutions calculated by the computer (the CVRTW Engine).

From this vantage point, the human user can effectively drive the CVRTW Engine to optimize

the CVRTW solution [1].

This study creates a new perspective on the optimization process by separating the

CVRTW Engine and the optimization algorithm. It began by programming ten unique

algorithms in place of a human user. Each algorithm used a different technique to drive

the CVRTW Engine. After these ten algorithms ran against each other in the Algorithm

Comparison Run (Section 3.1), one algorithm emerged with the best performance. Next,

four experiments investigated its success by comparing variants of the algorithm on a set of

standardized benchmarks.

2 Experimental Components

2.1 CVRTW Solution

Figure 1 presents a visualization of a CVRTW solution [1]. The large central circle represents

the depot from where the vehicles depart. The smaller circles represent the various customers

and the connecting line segments represent the vehicle routes. The pie charts within the

smaller circles represent the availabilities (time windows) of the customers.

[Figure 1: A CVRTW solution. Key: central depot, customer, vehicle route; pie charts mark the closed and open portions of each customer's time window.]

A CVRTW Solution is a set of n_c customers. Each customer c has a location (x_c, y_c), a time window [t_c^open, t_c^close] and a demanded amount of product, p_c. V vehicles service these n_c customers. Each vehicle v carries k amount of product and travels a total distance d_v from the depot, to all of its q customers (the set {c_v1, ..., c_vq}) and back to the depot. A solution is feasible if each customer c is serviced at a time t with t_c^open <= t <= t_c^close, and the total demand p_c of the customers c_v1, ..., c_vq on each vehicle v does not exceed k (all customers receive their shipment within their time windows and no vehicle runs out of product). If these conditions are not met, a solution is infeasible.

The cost of a solution is the number of vehicles, V. If V is equivalent for two solutions, then the sum of the d_v (the aggregate distance that the vehicles travel) is used as a tie-breaker. The goal of optimization is to find a solution with minimal cost.
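The definitions above can be sketched in C++, the study's implementation language. The struct names and conventions below (travel time equal to Euclidean distance, vehicles waiting when they arrive before a window opens) are illustrative assumptions, not the study's actual code:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// A customer c with a location (x, y), a time window [open, close]
// and a demanded amount of product.
struct Customer {
    double x = 0, y = 0;
    double open = 0, close = 0;  // time window bounds
    double demand = 0;           // p_c, the demanded amount of product
};

// Euclidean distance, also used here as the travel time (an assumption).
double dist(const Customer& a, const Customer& b) {
    return std::hypot(a.x - b.x, a.y - b.y);
}

// One vehicle route: the stops visited in order, with the depot at both ends.
struct Route {
    std::vector<Customer> stops;
};

// Feasibility: every customer is reached inside its time window and the
// route's total demand does not exceed the vehicle capacity k. A vehicle
// that arrives before a window opens waits (a common VRPTW convention).
bool feasible(const Route& r, const Customer& depot, double capacity) {
    double load = 0, t = 0;
    const Customer* prev = &depot;
    for (const Customer& c : r.stops) {
        load += c.demand;
        t += dist(*prev, c);
        if (t < c.open) t = c.open;    // wait for the window to open
        if (t > c.close) return false; // arrived too late: infeasible
        prev = &c;
    }
    return load <= capacity;
}

// d_v: depot -> stops in order -> depot.
double routeDistance(const Route& r, const Customer& depot) {
    double d = 0;
    const Customer* prev = &depot;
    for (const Customer& c : r.stops) {
        d += dist(*prev, c);
        prev = &c;
    }
    return d + dist(*prev, depot);
}

// Cost ordering: fewer vehicles always wins; aggregate distance breaks ties.
bool cheaper(std::size_t v1, double d1, std::size_t v2, double d2) {
    return v1 != v2 ? v1 < v2 : d1 < d2;
}
```

A solution's cost is then its number of routes, compared with `cheaper` before falling back to the summed `routeDistance` values.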

Each algorithm operated in discrete intervals of time known as cycles. Within a cycle,

the algorithm invoked the CVRTW Engine to optimize the solution. The CVRTW Engine

systematically searched through possible ways to reduce the cost of the solution. It first considered 1-ply moves, the relocation of a customer from one vehicle route to another.

The CVRTW Engine then considered multiple-ply moves, which were simply series of 1-ply

moves. It considered 2-ply moves, 3-ply moves, 4-ply moves and such until it reached the

cycle time limit.

The CVRTW Engine had two primary search methods: greedy search and steepest search.

These search methods determined how the CVRTW Engine applied improvements that it

found. If the CVRTW Engine used greedy search, it immediately implemented any improvement found and restarted itself with the new solution. This improvement is known as a

greedy move. In contrast, if the CVRTW Engine used steepest search, it did not implement

any improvement until the end of the cycle time limit. Then, it applied the move that

minimized the cost of the solution. Regardless of the search method, the CVRTW Engine

attempted to minimize the number of vehicles and the aggregate distance traveled.
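The contrast between the two search methods can be sketched by abstracting each candidate move to the cost change it would cause (a simplification: the actual engine enumerates 1-ply and multiple-ply customer relocations):

```cpp
#include <cstddef>
#include <vector>

// Candidate moves are represented only by their cost deltas
// (negative = improvement). This abstracts away the engine's
// actual move enumeration, which is not detailed here.

// Greedy search: apply the FIRST improving move found, then restart the scan
// from the new solution.
int greedyPick(const std::vector<double>& deltas) {
    for (std::size_t i = 0; i < deltas.size(); ++i)
        if (deltas[i] < 0) return static_cast<int>(i);
    return -1;  // no improvement found
}

// Steepest search: scan ALL candidates, then apply only the single move
// that lowers the cost the most, at the end of the cycle.
int steepestPick(const std::vector<double>& deltas) {
    int best = -1;
    double bestDelta = 0;
    for (std::size_t i = 0; i < deltas.size(); ++i)
        if (deltas[i] < bestDelta) {
            bestDelta = deltas[i];
            best = static_cast<int>(i);
        }
    return best;
}
```

On the deltas {2, -1, -5, 3}, greedy would act on the move at index 1 immediately, while steepest would wait and choose the larger improvement at index 2.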

To focus searches, HuGSS defined priorities for customers [1]. Priority referred to the scope of the search and not to the importance of the customers in receiving deliveries. There were three different levels of priority for customers: high, medium and low. Human users applied priorities to search deeper in a particular region of the solution. For example, to focus the search on a group of customers, they set those customers to high priority and the rest to medium or low priority.

In much the same way, the algorithms also used priorities to focus searches. An algorithm set a customer to high priority to allow the CVRTW Engine to move that customer off its current vehicle route onto a different vehicle route. Similarly, to prevent the CVRTW Engine from moving a customer off its current route, an algorithm set the customer to medium or low priority. Customer priorities also affected whether a route accepted new customers. If an algorithm set any customer on a route to low priority, then the CVRTW Engine did not move any customers onto that route. The CVRTW Engine only moved customers onto routes consisting entirely of high and medium priority customers.

Priorities helped reduce the complexity of the search. In one case, focusing the search on

20 of 100 customers decreased the number of 1-ply moves by a factor of 30, 2-ply moves by

a factor of 222, and 3-ply moves by a factor of 18,432 [1].
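The two priority rules above can be stated compactly. The enum and function names are illustrative; this is a sketch of the rules just described, not the engine's actual code:

```cpp
#include <vector>

enum class Priority { High, Medium, Low };

// A customer may be moved off its current route only if it is high priority.
bool mayMoveOff(Priority p) { return p == Priority::High; }

// A route accepts incoming customers only if every customer already on it
// is high or medium priority; a single low-priority customer freezes it.
bool acceptsNewCustomers(const std::vector<Priority>& routePriorities) {
    for (Priority p : routePriorities)
        if (p == Priority::Low) return false;
    return true;
}
```

The engine would consult `mayMoveOff` when enumerating customers to relocate and `acceptsNewCustomers` when enumerating destination routes, which is what shrinks the move space so sharply.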

All experiments used Solomon's Random Clustered (RC) benchmarks RC101 through RC108

[11]. Each of the RC1 benchmarks has 100 customers and starts with 100 vehicles (one vehicle

servicing each customer).

To pre-compute the starting solutions, the CVRTW Engine optimized using greedy

search. It ran for 90 minutes on a Pentium III 867 MHz Linux machine with 512 MB

of RAM.
2.4 Greedy Random (GR)

This study evaluated ten different algorithms, each written in C++ and integrated with the CVRTW Engine. The algorithm that performed significantly better than all others at p = 0.025 was Greedy Random (GR).

Figure 2 summarizes GR's logic sequence. The figure does not illustrate the central depot or customers' time windows. At first, GR sets all of the customers to high priority, so that it will consider all possible cases. It then randomly selects one customer and moves that customer from one vehicle route to another (Step 2). This is GR's initial random move. Usually that customer relocation will make the solution infeasible (85.9% of the time), as in Step 3. Sometimes the solution will remain feasible. Regardless, GR then sets the moved customer to medium priority, so that the CVRTW Engine cannot move it again. Then, GR invokes the CVRTW Engine to reoptimize the solution for a cycle (Step 4). If the new solution is better than the original solution (Step 1), then the solution is used. If not, it is discarded. If the CVRTW Engine cannot find a feasible solution within the cycle time limit, the solution is discarded.
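Figure 2's logic sequence can be sketched as follows, with a hypothetical `Solution` type and a `reoptimize` callback standing in for the CVRTW Engine:

```cpp
#include <cstddef>
#include <optional>
#include <random>
#include <vector>

// A deliberately minimal stand-in for a CVRTW solution: just an assignment
// of each customer to a route. The real solution also carries stop orders,
// distances and time windows; those live behind `reoptimize` here.
struct Solution {
    std::vector<int> routeOf;  // routeOf[c] = index of customer c's route
};

// One Greedy Random cycle. `reoptimize` stands in for the CVRTW Engine:
// given the perturbed solution and the index of the pinned (medium-priority)
// customer, it searches greedily for one cycle and returns the resulting
// cost, or no value if it cannot restore feasibility within the time limit.
// These names are illustrative, not the study's actual interface.
// Requires numRoutes >= 2 so a different target route always exists.
template <typename Engine>
Solution greedyRandomCycle(Solution s, double currentCost, int numRoutes,
                           Engine reoptimize, std::mt19937& rng) {
    Solution trial = s;
    // Step 2: pick a random customer and move it to a different random route
    // (the initial random move; it often makes the solution infeasible).
    std::uniform_int_distribution<std::size_t> pickC(0, trial.routeOf.size() - 1);
    std::size_t c = pickC(rng);
    std::uniform_int_distribution<int> pickR(0, numRoutes - 1);
    int r = pickR(rng);
    while (r == trial.routeOf[c]) r = pickR(rng);
    trial.routeOf[c] = r;
    // Steps 3-4: the moved customer is pinned at medium priority, then the
    // engine reoptimizes the solution for one cycle.
    std::optional<double> newCost = reoptimize(trial, static_cast<int>(c));
    // Keep the trial only if it is feasible and beats the incumbent.
    if (newCost && *newCost < currentCost) return trial;
    return s;  // otherwise discard it
}
```

In the study's GR, an accepted solution becomes the starting point of the next cycle and the process repeats.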


2.5 Algorithms

Aside from GR, this study used nine other programmed algorithms. Like GR, these algorithms were all improvement algorithms working to optimize a solution from a starting

point.

The logic sequence of each algorithm may be summarized as follows:

High Priority (HI)

1. Set all customers to high priority.

2. Optimize using a greedy search.

Steepest Climbing (SC)

1. Set all customers to high priority.

2. Optimize using a steepest search for a cycle.

3. If the solution is better, use it. Otherwise, discard it.

4. Repeat.

Random Priorities (RP)

1. Randomly set all customers to either high or medium priority.

2. Optimize using a steepest search for a cycle.

3. If the solution is better, use it. Otherwise, discard it.

4. Repeat.

Random Circle Priorities (RCP)

1. Set all customers to medium priority.

2. Select a random customer and set it to high priority.

3. Set all customers within a given radius of that customer to high priority.

4. Optimize using a steepest search for a cycle.

5. If the solution is better, use it. Otherwise, discard it.

6. Repeat.

Random Routes (RR)

1. Set all customers to medium priority.


2. Select two different routes and set all of their customers to high priority.

3. Optimize using a steepest search for a cycle.

4. If the solution is better, use it. Otherwise, discard it.

5. Repeat.

Random Adjacent Routes (RAR)

1. Set all customers to medium priority.

2. Select a random customer and select another random customer within a given radius that is on a different route than the first customer. If there are no customers on different routes within the radius, select a new first customer.

3. Set the routes of the two selected customers to high priority.

4. Optimize using a steepest search for a cycle.

5. If the solution is better, use it. Otherwise, discard it.

6. Repeat.

Random Priorities Greedy Random (RPGR)

1. Randomly set all customers to high or medium priority.

2. Randomly reassign one customer from one vehicle route to another, different vehicle route.

3. Re-optimize the routes.

4. Set the moved customer to medium priority (so that it cannot be moved back by the

CVRTW Engine).

5. Optimize using a greedy search for a cycle.

6. If the solution is better, use it. Otherwise, discard it. If the solution is infeasible,

discard it.

7. Repeat.

Repetitive Steepest Search (RSS)

1. Set all customers to high priority.

2. Optimize using a steepest search for 1/6 of a cycle.

3. Set the priority of the moved customers to medium (so that they cannot be moved

again by the CVRTW Engine).


5. Optimize using a steepest search for 1/2 of a cycle.

6. If the solution is better, use it. Otherwise, discard it.

7. Repeat.

ANY Algorithm (ANY)

1. Randomly select an algorithm.

2. Use the selected algorithm to drive the CVRTW Engine for one cycle.

3. Repeat.

A function in the CVRTW Engine encapsulated each of the ten algorithms. The CVRTW Engine initialized itself using a pre-computed solution and a set of parameters. These parameters specified information such as the algorithm, the cycle time limit and the number of cycles to run.

The selected algorithm drove the CVRTW Engine for the specified number of cycles. In each cycle, the algorithm performed its logic sequence (Sections 2.4 and 2.5) and invoked the CVRTW Engine to optimize the solution. The CVRTW Engine optimized until it reached the cycle time limit. Then, it returned the possibly optimized solution to the algorithm. The algorithm evaluated whether to use the solution or discard it. This process repeated until the CVRTW Engine reached the specified number of cycles.
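This cycle loop, which separates the optimization algorithm from the CVRTW Engine, can be sketched as a small interface. The names below (`Solution`, `Algorithm`, `Params`, `drive`) are illustrative assumptions, not the study's actual C++ types:

```cpp
// Illustrative interface sketch; the study's real types are not given.
struct Solution {
    int vehicles = 0;
    double distance = 0;
};

// Cost ordering: fewer vehicles first, aggregate distance as the tie-breaker.
bool better(const Solution& a, const Solution& b) {
    if (a.vehicles != b.vehicles) return a.vehicles < b.vehicles;
    return a.distance < b.distance;
}

// Each algorithm is a per-cycle policy: it performs its logic sequence
// (set priorities, make random moves, choose a search method), invokes the
// engine until the cycle time limit, and returns the candidate solution.
struct Algorithm {
    virtual Solution prepareCycle(const Solution& current) = 0;
    virtual ~Algorithm() = default;
};

// Parameters the engine is initialized with (algorithm choice and cycle
// time limit omitted for brevity).
struct Params {
    int numCycles = 0;
};

// The driver: run the chosen algorithm for the requested number of cycles,
// keeping a cycle's result only when it improves on the incumbent solution.
Solution drive(Algorithm& algo, Solution incumbent, const Params& p) {
    for (int i = 0; i < p.numCycles; ++i) {
        Solution candidate = algo.prepareCycle(incumbent);
        if (better(candidate, incumbent)) incumbent = candidate;
    }
    return incumbent;
}
```

Because every algorithm implements the same one-method interface, swapping optimization strategies does not touch the engine, which is the portability the study's approach is built on.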

Because this study produced a large volume of data (in excess of 200 MB), a data analyzer programmed in C++ parsed through the log files and generated summary tables containing the data. These summary tables were HTML files with hyperlinks that exposed more data underneath. Each average was a hyperlink to its components, and each of the components contained detailed run information about itself.

Once the experiments had collected all the data, MiniTab 13 read in the data les and

performed statistical analysis. MiniTab analyzed the data using a General Linear Model

ANOVA with Tukey pairwise comparisons with a 95% confidence interval to determine if the results were statistically significant [8]. Vehicles and Distance served as the response and

Algorithm and Benchmark formed the model.


3 Experiments

3.1 Algorithm Comparison Run

The first experiment, the Algorithm Comparison Run (ACR), compared the ten algorithms on the eight Solomon benchmarks (RC101-RC108). Each benchmark had three different starting points: Rank 0, Rank 10, and Rank 20. The goal of the ACR was to determine

which algorithm optimized best.

In the ACR, each algorithm ran twice for 30 minutes on the three different ranks of

RC101-RC108. Each algorithm ran for 30 cycles of 60 seconds, except for HI, which ran for

one cycle of 1800 seconds (because it was a continual greedy search) and RSS, which ran for

15 cycles of 120 seconds each (because it did two separate searches in one cycle).

Rank Algorithm Vehicles Distance

1 GR 12.81 1380

2 ANY 13.13 1398

3 RSS 13.33 1393

4 SC 13.38 1398

5 RPGR 13.44 1447

6 RR 13.50 1456

7 RP 13.54 1405

8 RAR 13.65 1415

9 RCP 13.99 1457

10 HI 14.65 1572

Table 1: The overall averaged algorithm rankings and results of the Algorithm Comparison

Run.

Table 1 illustrates the results of the ACR, averaged across benchmarks and ranks. GR produced a lower average number of vehicles than the other algorithms did. Statistical analysis of the ACR shows that GR performed significantly better than the other algorithms at p = 0.025.

The four experiments after the ACR focused on understanding and analyzing GR. Each experiment tested a hypothesis by comparing programmed variants of GR on a set of standardized benchmarks.
3.2 Feasible/Infeasible Run

The Feasible/Infeasible Run (FIR) hypothesized that GR's success originated from its use of infeasible space, a technique that no other algorithm had ever used before. By comparing two variants of GR, the FIR determined the effect of feasibility on GR's optimization performance.

The first variant, Infeasible GR, only made infeasible initial random moves. Similarly,

Feasible GR only made feasible initial random moves.

The FIR ran twice for 30 minutes on rank 10 of RC101-RC108. Both GR variants

ran for 30 cycles of 60 seconds each.

Benchmark Infeasible GR Feasible GR

RC101 15:1652 15:1681

RC102 13.5:1492 14:1500

RC103 11:1375 11:1365

RC104 10:1196 10:1200

RC105 14:1566 14:1568

RC106 12:1431 12:1437

RC107 11:1271 11:1259

RC108 11:1184 11:1182

Scores 12.1875:1396 12.25:1399

Table 2: The averaged results of the Feasible/Infeasible Run.

In Table 2 and for all subsequent tables, the notation signifies vehicles:distance. For example, 15:1652 means the optimized solution consisted of 15 vehicles traveling an aggregate distance of 1652 units. Bold type denotes the lowest cost solution, not necessarily the statistically significantly best solution. Statistical analysis of the data in Table 2 showed that there was no significant difference between Infeasible GR and Feasible GR using a 95% confidence interval. This suggested that feasibility was not the primary reason for GR's success.

3.3 Variable Priorities Run

The FIR had shown that feasibility was not a contributing factor to GR's vehicle routing

optimization performance. The Variable Priorities Run (VPR) hypothesized that GR derived

its success from setting the moved customer to medium priority. This prevented the CVRTW

Engine from moving that customer and possibly undoing the initial random move. Three

variants of GR ran: High Priority GR, Medium Priority GR and Low Priority GR. High

Priority GR set the moved customer to high priority. Likewise, Medium Priority GR and

Low Priority GR set the moved customer to medium and low priority, respectively.

The 30-cycle VPR ran twice on rank 10 of Solomon's RC101-RC108 benchmarks. Each

cycle lasted 60 seconds.

Benchmark High Priority GR Medium Priority GR Low Priority GR

RC101 15:1654.64 15:1658.9 15:1677.89

RC102 13.5:1482.47 14:1498.4 13.5:1512.33

RC103 11:1332.01 11:1349.82 11:1411.98

RC104 10:1174.37 10:1200.25 10:1196.15

RC105 14:1542.38 14:1558.97 14:1592.72

RC106 12:1417.42 12:1435.99 12:1434.55

RC107 11:1261.19 11:1279.26 11:1301.31

RC108 11:1174.62 11:1184.82 11:1193.59

Scores 12.1875:1379.89 12.25:1395.8 12.1875:1415.07

Table 3: The averaged results of the 30-cycle Variable Priorities Run.

Statistical analysis of the results in Table 3 showed that GR's optimization performance increased with the priority of the moved customer. Because High Priority GR performed significantly better than Medium Priority GR in distance at p = 0.025, the 90-cycle VPR ran to test whether High Priority GR's dominance existed merely because it considered more possibilities and thus had a higher chance of improvement. If this were so, then its dominance would dissipate with a longer total search time because Medium Priority GR would then be able to consider more possibilities as well.

The 90-cycle VPR ran twice on rank 10 of Solomon's RC101-RC108 benchmarks. Each cycle lasted 60 seconds.


Benchmark High Priority GR Medium Priority GR Low Priority GR

RC101 15:1635.11 15:1635.67 15:1666.38

RC102 13:1507.94 13:1512.66 13.5:1545.16

RC103 11:1299.75 11:1343.72 11:1361.31

RC104 10:1156.99 10:1162.48 10:1194.93

RC105 14:1545.7 14:1549.01 14:1568.98

RC106 12:1411.18 12:1410.08 12:1435.55

RC107 11:1253.69 11:1261.79 11:1287.66

RC108 11:1171.97 11:1150.01 11:1208.99

Scores 12.125:1372.79 12.125:1378.18 12.1875:1408.62

Table 4: The averaged results of the 90-cycle Variable Priorities Run.

Statistical analysis of the results of the 90-cycle VPR (Table 4) showed that there was no significant difference in either vehicles or distance between High Priority GR and Medium Priority GR, but that both were significantly better in distance than Low Priority GR at p = 0.025. This confirmed the earlier hypothesis that GR derived success from not moving

the customer back immediately. High Priority GR performed better in the 30-cycle VPR

because it did not waste a cycle searching when the initial random move was fruitless. Unlike

Medium Priority GR, it undid the move and then reoptimized. When the initial random

move enabled the solution to be improved, High Priority GR optimized like Medium Priority

GR. In the 90-cycle VPR, Medium Priority GR had more time and thus fruitless searches did not impact its effectiveness as much. Low Priority GR performed worse than both because it

restricted its search space. The pair of VPRs established that the modification of the priority of the moved customer only served to control the number of possible solutions considered. Otherwise, it did not form a core factor of GR's performance.
3.4 Multiple Initial Random Moves Run

The Multiple Initial Random Moves Run (MIRMR) tested the hypothesis that GR's prowess

depended on the number of initial random moves. It posed the question of whether more


initial random moves would perform better than only one initial random move. The MIRMR

ran because neither the priority of the moved customer nor the feasibility or infeasibility of

the initial random move had been found to be factors.

The MIRMR tested 7 variants of GR that made 1, 2, 5, 7, 10, 25 and 50 initial random

moves. The 10, 25 and 50 move variants all had exactly the same performance as the 7 move

variant.

Benchmark 1 Move 2 Moves 5 Moves 7 Moves

RC101 15:1655.93 15:1663.59 15:1718.99 15:1718.99

RC102 13.5:1502.3 14:1516.82 14:1552.24 14:1554.07

RC103 11:1364.89 11:1379.72 11:1411.98 11:1411.98

RC104 10:1200.53 10:1196.61 10:1200.53 10:1200.53

RC105 14:1563.97 14:1570.9 14:1647.17 14:1647.17

RC106 12:1437.89 12:1437.89 12:1437.89 12:1437.89

RC107 11:1274.74 11:1296.28 11:1306.35 11:1306.35

RC108 11:1165.93 11:1187.78 11:1217.63 11:1217.63

Scores 12.1875:1395.77 12.25:1406.2 12.25:1436.6 12.25:1436.83

Table 5: The averaged results of the Multiple Initial Random Moves Run.

Statistical analysis of the results of the MIRMR (Table 5) indicated that there was no significant difference between the 1 and 2 move variants, but that both were significantly better than the other move variants at p = 0.025. This indicated a strong correlation: the fewer initial random moves, the better the performance. However, this did not mean that no initial random moves needed to be made. The HI algorithm in the ACR tested that possibility; it performed significantly worse than the other nine algorithms at p = 0.025.
3.5 Steepest Greedy Random Run

The Steepest Greedy Random Run (SGRR) ran to confirm the hypothesis that GR derived its success from one initial random move. The SGRR accomplished this by running small "mini-cycles" of steepest searches (analogous to the CVRTW Engine's greedy moves).

The SGRR ran on RC101-RC108 with a 30 minute search time. Within the search, the

steepest search ran with a constant mini-cycle time of two seconds. In between cycles, the

solution was evaluated as it had been in normal GR. SGRR compared three dierent variants

of Steepest GR, one with 30 cycles of 60 seconds each, one with 60 cycles of 30 seconds each

and one with 90 cycles of 20 seconds each.

Benchmark 30 Cycles 60 Cycles 90 Cycles

RC101 15:1653.84 15:1663.25 15:1667.31

RC102 14:1501.32 13.5:1524.9 13:1566.93

RC103 11:1349.15 11:1375.31 11:1363.63

RC104 10:1200.53 10:1194.98 10:1193.01

RC105 14:1550.01 14:1555.7 14:1566.72

RC106 12:1437.03 12:1429.18 12:1433.81

RC107 11:1291.86 11:1278.44 11:1263.91

RC108 11:1184.62 11:1166.06 11:1178.18

Scores 12.25:1396.04 12.1875:1398.48 12.125:1404.19

Table 6: The averaged results of the Steepest Greedy Random Run.

Statistical analysis of the results in Table 6 showed that there was no significant difference between the three variants using a 95% confidence interval. The number of vehicles diminished as more cycles were run. Also, Steepest GR did not perform significantly differently from GR even though it used a steepest search. This confirmed the hypothesis that GR derived its success from its initial random move.

Finally, the best results found by any variant of GR were compared to the best results ever

found and also to the best results of previous research with HuGSS [1].

Statistical analysis of the data in Table 7 showed that HuGSS was significantly better than GR at p = 0.025. However, on RC101 and RC107, GR was not significantly different from HuGSS using a 95% confidence interval. GR performed well on RC101 and RC107 because the customers in those solutions had narrow time windows. A smaller time window allowed the CVRTW Engine to conduct a deeper search, finding more improvements.


Benchmark GR HuGSS Best Ever

RC101 15:1641 15:1662 14:1669
RC102 13:1478 12:1569 12:1555
RC103 11:1264 11:1224 11:1110
RC104 10:1156 10:1136 10:1136
RC105 14:1541 13:1691 13:1637
RC106 12:1391 11:1475 11:1432
RC107 11:1241 11:1236 11:1231
RC108 11:1135 10:1185 10:1140
Scores 12.125:1367 11.63:1397 11.50:1364

Table 7: A comparison of GR's best results, HuGSS's best results and the best results ever found [1].

GR matched the best-ever number of vehicles on several benchmarks, including RC104 and RC107. It optimized to within one vehicle of the best solution ever found on all eight benchmarks.

4 Discussion

4.1 The Role of Infeasible Space

Initial experimental results suggested that GR derived its success from its use of infeasible

space. GR's initial random move made the solution infeasible 85.9% of the time because the

moved customer was distant from all other customers on its new route. The vehicle servicing

that route could not travel to all of its customers within their time windows, resulting in

infeasibility. Still, when GR started from an infeasible initial random move, it found a new

feasible solution 88.1% of the time. Only 17.4% of those were improvements over the solution

prior to the initial random move. GR found 73.1% of its improvements by passing through

infeasible space. No other algorithm used infeasible space as extensively as GR.


GR derived its success from neither infeasibility nor priorities, but from the use of an initial random move. This move acted as a catalyst, enabling GR to escape a non-optimal local minimum and approach the optimal solution.

This catalytic initial random move could not be too drastic. Statistical analysis of GR showed that there was a significant difference between the amount of infeasibility (the maximum lateness of the vehicles) for initial random moves that resulted in improvements compared to those that did not. Specifically, initial random moves that resulted in improvements made significantly less change in infeasibility at p = 0.025 than those that did not.

GR also reoptimized the solution after its catalytic initial random move. It did this using

the CVRTW Engine's greedy moves, or in the case of Steepest GR, the mini-cycle moves.

In the Algorithm Comparison Run (Section 3.1), GR had an average of 36.3 greedy moves

per cycle. In cases where GR made an improvement, the average was 47.3 greedy moves per

cycle. Compared to when it did not find an improvement (33.8 greedy moves per cycle), GR made 39.8% more moves when it made an improvement. GR made significantly more moves at p = 0.025.

GR made one catalytic initial random move that completely shifted its search space,

enabling it to find improvements by either skirting along the edges of infeasible space or by

moving unexpectedly within feasible space.

It was precisely this moderation that enabled GR to optimize well: the catalytic initial random move was neither too drastic nor too passive. Because it was appropriately moderate, GR could escape a non-optimal local minimum and approach the optimal solution.


5 Conclusion

This study has created and analyzed Greedy Random (GR), a novel algorithm for vehicle

routing optimization. It provides evidence that GR derives its success from a single catalytic

initial random move that allows it to escape from a non-optimal local minimum and approach

the optimal solution. GR provides a high level of portability because it is a successful

algorithm separate from an optimization engine. Thus, industry can easily apply GR to

other areas of optimization such as manufacturing planning and chip layout.

6 Acknowledgements

My thanks to my mentor, Dr. Neal Lesh at the Mitsubishi Electric Research Laboratory

(MERL) for his insight and inspiration. I also thank everyone at MERL, in particular Dr.

Brian Mirtich and Mr. Erik Piip. I deeply appreciate the assistance of the Research Science

Institute alumni, especially Doug Heimburger, Justin Bernold and Boris Zbarsky. I also

am grateful to Dr. Daniel Milhako of Western Michigan University for his assistance with

statistical analysis. Finally, I am immensely grateful to my parents for their encouragement

and steadfast support.


References

[1] D. Anderson, E. Anderson, N. Lesh, J. Marks, B. Mirtich and D. Ratajczak, "Human-Guided Simple Search." In 17th Nat. Conf. on Artificial Intelligence: July 2000, pp.

209-21, 2000.

[2] M. Atallah, Ed., Algorithms and the Theory of Computation Handbook. CRC Press,

Boca Raton, FL, pp. 19-26, 1999.

[3] E. Baker, "An Exact Algorithm for the Time-Constrained Traveling Salesman Problem."

Operations Research vol. 31, no. 5, Sept-Oct., pp. 938-945, 1983.

[4] J. Braklow, W. Graham, S. Hassler, K. Peck and W. Powell, "Interactive Optimization

Improves Service and Performance for Yellow Freight System." INTERFACES vol. 22,

no. 1, Jan-Feb., pp 147-172, 1992.

[5] B. De Backer, V. Furnon, P. Kilby, P. Prosser and P. Shaw, "Solving Vehicle Routing

Problems using Constraint Programming and Metaheuristics." Journal of Heuristics

Special Issue on Constraint Programming, July 1997.

[6] B. Garcia, J. Potvin and J. M. Rousseau, "A parallel implementation of the tabu search

heuristic for vehicle routing problems with time window constraints." Computers &

Operations Research vol. 21, no. 9, pp. 1025-1033, 1994.

[7] J. Homberger and H. Gehring, "Two Evolutionary Metaheuristics for the Vehicle Routing Problem with Time Windows." INFOR vol. 37, no. 3, Aug., pp. 297-317, 1999.

[8] D. Montgomery, Design and Analysis of Experiments. New York, John Wiley & Sons,

1984.

[9] P. Prosser and P. Shaw, "Study of Greedy Search with Multiple Improvement Heuristics for Vehicle Routing Problems." University of Strathclyde Department of Computer Science, Glasgow, Scotland. Research Report 96/201, Dec. 1996.

[10] Y. Rochat and E. Taillard, "Probabilistic Diversification and Intensification in Local Search for Vehicle Routing." Journal of Heuristics vol. 1, pp. 147-167, 1995.

[11] M. Solomon, "Algorithms for the Vehicle Routing and Scheduling Problems with Time

Window Constraints." Operations Research vol. 35, no. 2, March-April, pp. 254-264,

1987.

[12] S. Thangiah, "Vehicle routing with time windows using genetic algorithms." Artificial Intelligence and Robotics Laboratory, Computer Science Department, Slippery Rock

University, Slippery Rock, PA. Technical Report, 1993.