
Assignment
Discrete Mathematics & Design of Algorithms
COE 302
Amar Malik
217/COE/14
COE 1

Greedy Algorithms
What is a Greedy Algorithm?
A greedy algorithm is an algorithmic paradigm that follows the problem-solving heuristic of making the locally optimal choice at each stage with the hope of finding a global optimum. In many problems, a greedy strategy does not in general produce an optimal solution, but nonetheless a greedy heuristic may yield locally optimal solutions that approximate a globally optimal solution in a reasonable time.
For example, a greedy strategy for the traveling salesman problem (which is
of a high computational complexity) is the following heuristic: "At each stage
visit an unvisited city nearest to the current city". This heuristic need not find
a best solution, but terminates in a reasonable number of steps; finding an
optimal solution typically requires unreasonably many steps. In mathematical
optimization, greedy algorithms solve combinatorial problems having the
properties of matroids.
In general, greedy algorithms have five components:
- A candidate set, from which a solution is created
- A selection function, which chooses the best candidate to be added to the solution
- A feasibility function, which is used to determine whether a candidate can be used to contribute to a solution
- An objective function, which assigns a value to a solution, or a partial solution, and
- A solution function, which will indicate when we have discovered a complete solution

Greedy algorithms produce good solutions on some mathematical problems, but not on others. Most problems for which they work will have two properties:
Greedy choice property
We can make whatever choice seems best at the moment and then solve the
subproblems that arise later. The choice made by a greedy algorithm may
depend on choices made so far, but not on future choices or all the solutions
to the subproblem. It iteratively makes one greedy choice after another,
reducing each given problem into a smaller one. In other words, a greedy
algorithm never reconsiders its choices. This is the main difference from
dynamic programming, which is exhaustive and is guaranteed to find the
solution.
After every stage, dynamic programming makes decisions based on all the
decisions made in the previous stage, and may reconsider the previous
stage's algorithmic path to solution.
Optimal substructure
A problem exhibits optimal substructure if an optimal solution to the problem
contains optimal solutions to the sub-problems.
Types
Greedy algorithms can be characterized as being short-sighted and non-recoverable. They are ideal only for problems which have optimal substructure. Despite this, for many simple problems (e.g. giving change), the best suited algorithms are greedy algorithms. It is important, however, to note that a greedy algorithm can be used as a selection algorithm to prioritize options within a search or branch-and-bound algorithm. There are a few variations of the greedy algorithm:

- Pure greedy algorithms
- Orthogonal greedy algorithms
- Relaxed greedy algorithms

Examples of Greedy Algorithms: Kruskal's Algorithm for Minimum Spanning Tree
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
struct Edge
{
int src, dest, weight;
};
struct Graph
{
int V, E;
struct Edge* edge;
};
struct Graph* createGraph(int V, int E)
{
struct Graph* graph = (struct Graph*) malloc( sizeof(struct Graph) );
graph->V = V;
graph->E = E;
graph->edge = (struct Edge*) malloc( graph->E * sizeof( struct Edge ) );
return graph;
}
struct subset
{
int parent;
int rank;
};

int find(struct subset subsets[], int i)
{
if (subsets[i].parent != i)
subsets[i].parent = find(subsets, subsets[i].parent);
return subsets[i].parent;
}
void Union(struct subset subsets[], int x, int y)
{
int xroot = find(subsets, x);
int yroot = find(subsets, y);
if (subsets[xroot].rank < subsets[yroot].rank)
subsets[xroot].parent = yroot;
else if (subsets[xroot].rank > subsets[yroot].rank)
subsets[yroot].parent = xroot;
else
{
subsets[yroot].parent = xroot;
subsets[xroot].rank++;
}
}
int myComp(const void* a, const void* b)
{
struct Edge* a1 = (struct Edge*)a;
struct Edge* b1 = (struct Edge*)b;
/* qsort needs a negative/zero/positive result, not just 0 or 1 */
return (a1->weight > b1->weight) - (a1->weight < b1->weight);
}
void KruskalMST(struct Graph* graph)
{
int V = graph->V;
struct Edge result[V];
int e = 0;
int i = 0;
qsort(graph->edge, graph->E, sizeof(graph->edge[0]), myComp);
struct subset *subsets =
(struct subset*) malloc( V * sizeof(struct subset) );
for (int v = 0; v < V; ++v)
{
subsets[v].parent = v;
subsets[v].rank = 0;
}
while (e < V - 1)
{
struct Edge next_edge = graph->edge[i++];
int x = find(subsets, next_edge.src);
int y = find(subsets, next_edge.dest);

if (x != y)
{
result[e++] = next_edge;
Union(subsets, x, y);
}
}
printf("Following are the edges in the constructed MST\n");
for (i = 0; i < e; ++i)
printf("%d -- %d == %d\n", result[i].src, result[i].dest,
result[i].weight);
return;
}
int main()
{
/* Let us create the following weighted graph
        10
    0--------1
    |  \     |
   6|   5\   |15
    |      \ |
    2--------3
        4
*/
int V = 4;
int E = 5;
struct Graph* graph = createGraph(V, E);

// add edge 0-1
graph->edge[0].src = 0;
graph->edge[0].dest = 1;
graph->edge[0].weight = 10;
// add edge 0-2
graph->edge[1].src = 0;
graph->edge[1].dest = 2;
graph->edge[1].weight = 6;
// add edge 0-3
graph->edge[2].src = 0;
graph->edge[2].dest = 3;
graph->edge[2].weight = 5;
// add edge 1-3
graph->edge[3].src = 1;
graph->edge[3].dest = 3;
graph->edge[3].weight = 15;
// add edge 2-3
graph->edge[4].src = 2;
graph->edge[4].dest = 3;
graph->edge[4].weight = 4;
KruskalMST(graph);
return 0;
}

Examples of Greedy Algorithms: Prim's Algorithm for Minimum Spanning Tree
#include<iostream>
#include<climits>
#define V 4
using namespace std;
int minKey(int key[], bool mstSet[])
{
int min = INT_MAX, min_index = -1;
for (int v = 0; v < V; v++)
if (mstSet[v] == false && key[v] < min)
{
min = key[v];
min_index = v;
}
return min_index;
}
void printMST(int parent[], int graph[V][V])
{
int total = 0;
for (int i = 1; i < V; i++)
{
cout << parent[i] << "-" << i << " " << graph[i][parent[i]] << endl;
total += graph[i][parent[i]];
}
cout << "therefore minimum total distance is:\n" << total;
}
void primMST(int graph[V][V])
{
int parent[V];  // array to store constructed MST
int key[V];     // cheapest edge weight connecting each vertex to the tree
bool mstSet[V]; // set of vertices already included in the MST
for (int v = 0; v < V; v++)
{
key[v] = INT_MAX;
mstSet[v] = false;
}
key[0] = 0;     // start from vertex 0 so it is picked first
parent[0] = -1; // the root of the MST has no parent
for (int c = 0; c < V - 1; c++)
{
int u = minKey(key, mstSet);   // pick the cheapest vertex not yet in the tree
mstSet[u] = true;
for (int v = 0; v < V; v++)    // relax the keys of u's neighbours
if (graph[u][v] && mstSet[v] == false && graph[u][v] < key[v])
{
parent[v] = u;
key[v] = graph[u][v];
}
}
printMST(parent, graph);
}
int main()
{
int graph[V][V]={{0,10,15,20},
{10,0,35,25},
{15,35,0,30},
{20,25,30,0}};
primMST(graph);
return 0;
}

Optimal Merge Patterns & Huffman Coding


Huffman coding is a lossless data compression algorithm. The idea is to
assign variable-length codes to input characters, lengths of the assigned
codes are based on the frequencies of corresponding characters. The most
frequent character gets the smallest code and the least frequent character
gets the largest code.
The variable-length codes assigned to input characters are prefix codes, meaning the codes (bit sequences) are assigned in such a way that the code assigned to one character is not a prefix of the code assigned to any other character. This is how Huffman coding makes sure that there is no ambiguity when decoding the generated bit stream.
There are two major parts in Huffman coding:
1) Build a Huffman tree from the input characters.
2) Traverse the Huffman tree and assign codes to the characters.

Steps to build a Huffman tree
The input is an array of unique characters along with their frequencies of occurrence, and the output is the Huffman tree.
1. Create a leaf node for each unique character and build a min heap of all leaf nodes. (A min heap is used as a priority queue; the value of the frequency field is used to compare two nodes. Initially, the least frequent character is at the root.)
2. Extract the two nodes with the minimum frequency from the min heap.
3. Create a new internal node with frequency equal to the sum of the two nodes' frequencies. Make the first extracted node its left child and the other extracted node its right child. Add this node to the min heap.
4. Repeat steps#2 and #3 until the heap contains only one node. The
remaining node is the root node and the tree is complete.
#include <bits/stdc++.h>
using namespace std;

struct MinHeapNode{
char data;
unsigned freq;
MinHeapNode *left, *right;
MinHeapNode(char data, unsigned freq)
{
left = right = NULL;
this->data = data;
this->freq = freq;
}
};
struct compare{
bool operator()(MinHeapNode* l, MinHeapNode* r){
return (l->freq > r->freq);
}
};
void printCodes(struct MinHeapNode* root, string str){
if (!root)
return;
if (root->data != '$')
cout << root->data << ": " << str << "\n";
printCodes(root->left, str + "0");
printCodes(root->right, str + "1");
}
void HuffmanCodes(char data[], int freq[], int size){
struct MinHeapNode *left, *right, *top;
priority_queue<MinHeapNode*, vector<MinHeapNode*>, compare> minHeap;
for (int i = 0; i < size; ++i)
minHeap.push(new MinHeapNode(data[i], freq[i]));

while (minHeap.size() != 1){
left = minHeap.top();
minHeap.pop();
right = minHeap.top();
minHeap.pop();
top = new MinHeapNode('$', left->freq + right->freq);
top->left = left;
top->right = right;
minHeap.push(top);
}

printCodes(minHeap.top(), "");
}
int main(){
char arr[] = { 'a', 'b', 'c', 'd', 'e', 'f' };
int freq[] = { 5, 9, 12, 13, 16, 45 };
int size = sizeof(arr) / sizeof(arr[0]);
HuffmanCodes(arr, freq, size);
return 0;
}

Time complexity: O(n log n) where n is the number of unique characters. If there are n nodes, extractMin() is called 2*(n - 1) times. extractMin() takes O(log n) time as it calls minHeapify(). So, the overall complexity is O(n log n). If the input array is sorted, there exists a linear time algorithm.
Optimal Merge Pattern
We have a set of files of various sizes to be merged. In what order and combinations should we merge them? The solution is essentially the same as the Huffman algorithm: a merge tree is constructed by repeatedly merging the two smallest files, so that the smallest files sit deepest in the tree and take part in the most merge passes.

Single Source Shortest Path using Greedy Approach

Dijkstra's Algorithm
Given a graph and a source vertex in graph, find shortest paths from source
to all vertices in the given graph.
Dijkstra's algorithm is very similar to Prim's algorithm for minimum spanning tree. Like Prim's MST, we generate an SPT (shortest path tree) with the given source as root. We maintain two sets: one set contains vertices included in the shortest path tree, the other set contains vertices not yet included. At every step of the algorithm, we find a vertex which is in the other set (not yet included) and has minimum distance from the source.
Below are the detailed steps used in Dijkstra's algorithm to find the shortest path from a single source vertex to all other vertices in the given graph.
1) Create a set sptSet (shortest path tree set) that keeps track of vertices
included in shortest path tree, i.e., whose minimum distance from source is
calculated and finalized. Initially, this set is empty.

2) Assign a distance value to all vertices in the input graph. Initialize all
distance values as INFINITE. Assign distance value as 0 for the source vertex
so that it is picked first.
3) While sptSet doesn't include all vertices:
- Pick a vertex u which is not in sptSet and has a minimum distance value.
- Include u in sptSet.
- Update the distance values of all adjacent vertices of u. To update the distance values, iterate through all adjacent vertices: for every adjacent vertex v, if the sum of the distance value of u (from the source) and the weight of edge u-v is less than the distance value of v, then update the distance value of v.

#include <stdio.h>
#include <stdbool.h>
#include <limits.h>
#define V 9
int minDistance(int dist[], bool sptSet[]){
int min = INT_MAX, min_index;
for (int v = 0; v < V; v++)
if (sptSet[v] == false && dist[v] <= min)
min = dist[v], min_index = v;
return min_index;
}
void printSolution(int dist[], int n){
printf("Vertex \t\t Distance from Source\n");
for (int i = 0; i < n; i++)
printf("%d \t\t %d\n", i, dist[i]);
}
void dijkstra(int graph[V][V], int src){
int dist[V];
bool sptSet[V];
for (int i = 0; i < V; i++)
dist[i] = INT_MAX, sptSet[i] = false;
dist[src] = 0;
for (int count = 0; count < V-1; count++){
int u = minDistance(dist, sptSet);
sptSet[u] = true;
for (int v = 0; v < V; v++)
if (!sptSet[v] && graph[u][v] && dist[u] != INT_MAX
&& dist[u]+graph[u][v] < dist[v])
dist[v] = dist[u] + graph[u][v];
}
printSolution(dist, V);
}
int main(){
int graph[V][V] = {{0, 4, 0, 0, 0, 0, 0, 8, 0},
{4, 0, 8, 0, 0, 0, 0, 11, 0},
{0, 8, 0, 7, 0, 4, 0, 0, 2},
{0, 0, 7, 0, 9, 14, 0, 0, 0},
{0, 0, 0, 9, 0, 10, 0, 0, 0},
{0, 0, 4, 14, 10, 0, 2, 0, 0},
{0, 0, 0, 0, 0, 2, 0, 1, 6},
{8, 11, 0, 0, 0, 0, 1, 0, 7},
{0, 0, 2, 0, 0, 0, 6, 7, 0}
};
dijkstra(graph, 0);

return 0;
}

Activity Selection Problem


You are given n activities with their start and finish times. Select the
maximum number of activities that can be performed by a single person,
assuming that a person can only work on a single activity at a time.
The greedy choice is to always pick the next activity whose finish time is least among the remaining activities and whose start time is greater than or equal to the finish time of the previously selected activity. We can sort the activities by finish time so that the next activity considered is always the one with the minimum finish time.
1) Sort the activities according to their finishing time
2) Select the first activity from the sorted array and print it.
3) Do following for remaining activities in the sorted array.
a) If the start time of this activity is greater than or equal to the finish time of the previously selected activity, then select this activity and print it.
#include<stdio.h>
void printMaxActivities(int s[], int f[], int n){
int i, j;
printf ("Following activities are selected \n");
i = 0;
printf("%d ", i);
for (j = 1; j < n; j++) {
if (s[j] >= f[i]){
printf ("%d ", j);
i = j;
}
}
}
int main(){
int s[] = {1, 3, 0, 5, 8, 5};
int f[] = {2, 4, 6, 7, 9, 9};
int n = sizeof(s)/sizeof(s[0]);
printMaxActivities(s, f, n);
getchar();
return 0;
}

Let the given set of activities be S = {1, 2, 3, ..., n}, with activities sorted by finish time. The greedy choice is to always pick activity 1. Why does activity 1 always provide one of the optimal solutions? We can prove it by showing that if there is another solution B whose first activity is not 1, then there is also a solution A of the same size with activity 1 as its first activity. Let the first activity selected by B be k; then A = (B - {k}) U {1} is also a valid solution of the same size. (Note that the activities in B are independent and k has the smallest finish time among them. Since k is not 1, finish(k) >= finish(1).)

Dynamic Programming
Dynamic programming is a method for solving a complex problem by breaking it down into a collection of simpler sub-problems, solving each of those sub-problems just once, and storing their solutions, ideally in a memory-based data structure. The next time the same sub-problem occurs, instead of recomputing its solution, one simply looks up the previously computed solution, thereby saving computation time at the expense of a modest expenditure in storage space. Each of the sub-problem solutions is indexed in some way, typically based on the values of its input parameters, so as to facilitate its lookup. The technique of storing solutions to sub-problems instead of recomputing them is called "memoization".
Dynamic programming algorithms are often used for optimization. A dynamic
programming algorithm will examine the previously solved sub-problems and
will combine their solutions to give the best solution for the given problem. In
comparison, a greedy algorithm treats the solution as some sequence of
steps and picks the locally optimal choice at each step. Using a greedy
algorithm does not guarantee an optimal solution, because picking locally
optimal choices may result in a bad global solution, but it is often faster to
calculate. Fortunately, some greedy algorithms (such as Kruskal's or Prim's
for minimum spanning trees) are proven to lead to the optimal solution. In
addition to finding optimal solutions to some problems, dynamic programming
can also be used for counting the number of solutions, or counting the
number of optimal solutions.

Examples of Dynamic Programming Algorithms

- Dijkstra's algorithm for the shortest path problem
- Fibonacci sequence
- Balanced 0-1 matrix
- Checkerboard
- Sequence alignment
- Tower of Hanoi puzzle
- Egg dropping puzzle
- Matrix chain multiplication

Traveling Salesman Problem


Given a set of cities and the distance between every pair of cities, the problem is to find the shortest possible route that visits every city exactly once and returns to the starting point. The difference between the Hamiltonian cycle problem and TSP is that the Hamiltonian cycle problem asks whether there exists a tour that visits every city exactly once; here the problem is to find a minimum weight Hamiltonian cycle. The problem is a famous NP-hard problem; there is no known polynomial time solution for it.
The following is a naive solution for the traveling salesman problem:
1) Consider city 1 as the starting and ending point.
2) Generate all (n-1)! permutations of cities.
3) Calculate cost of every permutation and keep track of minimum cost
permutation.
4) Return the permutation with minimum cost.
Time Complexity: Θ(n!)
Dynamic Programming based solution
Let the given set of vertices be {1, 2, 3, 4, ..., n}. Consider 1 as the starting and ending point of the output. For every other vertex i (other than 1), find the minimum cost path with 1 as the starting point, i as the ending point, and all vertices appearing exactly once. Let the cost of this path be cost(i); the cost of the corresponding cycle would be cost(i) + dist(i, 1), where dist(i, 1) is the distance from i to 1. Finally, return the minimum of all [cost(i) + dist(i, 1)] values.

To calculate cost(i) using dynamic programming, we need a recursive relation in terms of sub-problems. Define C(S, i) to be the cost of the minimum cost path visiting each vertex in set S exactly once, starting at 1 and ending at i (with 1 in S). The recurrence is:
C(S, i) = min over j in S, j != 1, j != i, of { C(S - {i}, j) + dist(j, i) }
Start with all subsets of size 2 and calculate C(S, i) for all such subsets, then calculate C(S, i) for all subsets S of size 3, and so on. Note that 1 must be present in every subset. For a set of size n, we consider n-2 subsets, each of size n-1, such that none of them contains the nth vertex.
Using the above recurrence relation, we can write a dynamic programming based solution. There are at most O(n * 2^n) sub-problems, and each one takes linear time to solve. The total running time is therefore O(n^2 * 2^n). The time complexity is much less than O(n!), but still exponential. The space required is also exponential, so this approach is infeasible even for a slightly higher number of vertices.
(Note: the following C program implements the simple nearest-neighbour greedy heuristic for TSP, not the dynamic programming recurrence above.)
#include<stdio.h>
int a[10][10],visited[10],n,cost=0;
int least(int c);
void get(){
int i,j;
printf("Enter No. of Cities: ");
scanf("%d",&n);
printf("\nEnter Cost Matrix\n");
for(i=0;i < n;i++){
printf("\nEnter Elements of Row # : %d\n",i+1);
for( j=0;j < n;j++)
scanf("%d",&a[i][j]);
visited[i]=0;
}
printf("\n\nThe cost list is:\n\n");
for( i=0;i < n;i++){
printf("\n\n");

for(j=0;j < n;j++)


printf("\t%d",a[i][j]);
}
}
void mincost(int city){
int i,ncity;
visited[city]=1;
printf("%d -->",city+1);
ncity=least(city);
if(ncity==999){
ncity=0;
printf("%d",ncity+1);
cost+=a[city][ncity];
return;
}
mincost(ncity);
}
int least(int c){
int i,nc=999;
int min=999,kmin;
for(i=0;i < n;i++){
if((a[c][i]!=0)&&(visited[i]==0))
if(a[c][i] < min)
{
min=a[c][i];
kmin=a[c][i];
nc=i;
}
}
if(min!=999)
cost+=kmin;
return nc;
}
void put(){
printf("\n\nMinimum cost:");
printf("%d",cost);
}
int main(){
get();
printf("\n\nThe Path is:\n\n");
mincost(0);
put();
return 0;
}

0-1 Knapsack problem
Given weights and values of n items, put these items in a knapsack of capacity W to get the maximum total value in the knapsack. In other words, given two integer arrays val[0..n-1] and wt[0..n-1] which represent values and weights associated with n items respectively, and an integer W which represents the knapsack capacity, find the maximum value subset of val[] such that the sum of the weights of this subset is smaller than or equal to W. You cannot break an item: either pick the complete item or don't pick it (the 0-1 property).
#include<stdio.h>
int max(int a, int b) { return (a > b)? a : b; }

int knapSack(int W, int wt[], int val[], int n){
int i, w;
int K[n+1][W+1];
for (i = 0; i <= n; i++){
for (w = 0; w <= W; w++){
if (i==0 || w==0)
K[i][w] = 0;
else if (wt[i-1] <= w)
K[i][w] = max(val[i-1] + K[i-1][w-wt[i-1]], K[i-1][w]);
else
K[i][w] = K[i-1][w];
}
}
return K[n][W];
}
int main(){
int val[] = {60, 100, 120};
int wt[] = {10, 20, 30};
int W = 50;

int n = sizeof(val)/sizeof(val[0]);
printf("%d", knapSack(W, wt, val, n));
return 0;
}

Time Complexity: O(nW) where n is the number of items and W is the capacity of the knapsack.

Difference Between Fractional Knapsack Problem and 0-1 Knapsack Problem
In theoretical computer science, the continuous knapsack problem (also
known as the fractional knapsack problem) is an algorithmic problem in
combinatorial optimization in which the goal is to fill a container (the
"knapsack") with fractional amounts of different materials chosen to
maximize the value of the selected materials. It resembles the classic
knapsack problem, in which the items to be placed in the container are
indivisible; however, the continuous knapsack problem may be solved in
polynomial time whereas the classic knapsack problem is NP-hard. It is a
classic example of how a seemingly small change in the formulation of a
problem can have a large impact on its computational complexity.
The difference between this problem and the fractional one is that you can't
take a fraction of an item. You either take the whole thing or none of it.

Single Source Shortest Path
In graph theory, the shortest path problem is the problem of finding a path
between two vertices (or nodes) in a graph such that the sum of the weights
of its constituent edges is minimized.
The most important algorithms for solving this problem are:
- Dijkstra's algorithm solves the single-source shortest path problem.
- Bellman-Ford algorithm solves the single-source problem if edge weights may be negative.
- A* search algorithm solves for single-pair shortest path using heuristics to try to speed up the search.
- Floyd-Warshall algorithm solves all-pairs shortest paths.
- Johnson's algorithm solves all-pairs shortest paths, and may be faster than Floyd-Warshall on sparse graphs.
- Viterbi algorithm solves the shortest stochastic path problem with an additional probabilistic weight on each node.

Bellman-Ford Algorithm
Given a graph and a source vertex src in graph, find shortest paths from src
to all vertices in the given graph. The graph may contain negative weight
edges.
Dijkstra's algorithm is a greedy algorithm with time complexity O(E + V log V) (with the use of a Fibonacci heap). Dijkstra doesn't work for graphs with negative weight edges; Bellman-Ford works for such graphs. Bellman-Ford is also simpler than Dijkstra and suits distributed systems well. But the time complexity of Bellman-Ford is O(VE), which is more than Dijkstra's.
Like other dynamic programming problems, the algorithm calculates shortest paths in a bottom-up manner. It first calculates the shortest distances for the shortest paths which have at most one edge in the path. Then, it calculates the shortest paths with at most 2 edges, and so on. After the ith iteration of the outer loop, the shortest paths with at most i edges are calculated. There can be at most |V| - 1 edges in any simple path, which is why the outer loop runs |V| - 1 times. The idea is: assuming there is no negative weight cycle, if we have calculated the shortest paths with at most i edges, then an iteration over all edges guarantees to give the shortest paths with at most (i+1) edges.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <limits.h>
struct Edge{
int src, dest, weight;
};
struct Graph{
int V, E;
struct Edge* edge;
};
struct Graph* createGraph(int V, int E){
struct Graph* graph =
(struct Graph*) malloc( sizeof(struct Graph) );
graph->V = V;
graph->E = E;
graph->edge =
(struct Edge*) malloc( graph->E * sizeof( struct Edge ) );
return graph;
}

void printArr(int dist[], int n)
{
printf("Vertex \t\t Distance from Source\n");
for (int i = 0; i < n; ++i)
printf("%d \t\t %d\n", i, dist[i]);
}
}
void BellmanFord(struct Graph* graph, int src){
int V = graph->V;
int E = graph->E;
int dist[V];
for (int i = 0; i < V; i++)
dist[i] = INT_MAX;

dist[src] = 0;
for (int i = 1; i <= V-1; i++){
for (int j = 0; j < E; j++){
int u = graph->edge[j].src;
int v = graph->edge[j].dest;
int weight = graph->edge[j].weight;
if (dist[u] != INT_MAX && dist[u] + weight < dist[v])
dist[v] = dist[u] + weight;
}
}
for (int i = 0; i < E; i++){
int u = graph->edge[i].src;
int v = graph->edge[i].dest;
int weight = graph->edge[i].weight;
if (dist[u] != INT_MAX && dist[u] + weight < dist[v])
printf("Graph contains negative weight cycle\n");
}
printArr(dist, V);
return;
}
int main(){
int V = 5;
int E = 8;
struct Graph* graph = createGraph(V, E);
graph->edge[0].src = 0;
graph->edge[0].dest = 1;
graph->edge[0].weight = -1;
graph->edge[1].src = 0;
graph->edge[1].dest = 2;
graph->edge[1].weight = 4;
graph->edge[2].src = 1;
graph->edge[2].dest = 2;
graph->edge[2].weight = 3;
graph->edge[3].src = 1;
graph->edge[3].dest = 3;
graph->edge[3].weight = 2;
graph->edge[4].src = 1;
graph->edge[4].dest = 4;
graph->edge[4].weight = 2;
graph->edge[5].src = 3;
graph->edge[5].dest = 2;
graph->edge[5].weight = 5;

graph->edge[6].src = 3;
graph->edge[6].dest = 1;
graph->edge[6].weight = 1;
graph->edge[7].src = 4;
graph->edge[7].dest = 3;
graph->edge[7].weight = -3;
BellmanFord(graph, 0);
return 0;
}

Multistage Graphs
A multistage graph G = (V, E) is a directed graph in which the vertices are partitioned into K >= 2 disjoint sets Vi, where 1 <= i <= K. In addition, if (u, v) is an edge in E, then u is in Vi and v is in Vi+1 for some stage i.
Let c(i, j) be the cost of edge (i, j). The cost of a path from S to T is the sum of the costs of the edges on the path. The multistage graph problem is to find the minimum cost path from S (the source) to T (the sink). The values on the edges are called the costs of the edges.
A dynamic programming solution to the multistage graph problem is as follows:
Let path(i, j) be some specification of the minimal path from vertex j in set i to vertex T; C(i, j) is the cost of this path; c(j, t) is the weight of the edge from j to t.
C(i,j) = min { c(j,l) + C(i+1,l) }, over all l in Vi+1 such that (j,l) is in E.

To write a simple algorithm, assign numbers to the vertices so that those in stage Vi have lower numbers than those in stage Vi+1.
int[] MStageForward(Graph G)
{
int n = G.n (number of nodes);
int k = G.k (number of stages);
float[] C = new float[n];
int[] D = new int[n];
int[] P = new int[k];
for (i = 1 to n) C[i] = 0.0;
for j = n-1 to 1 by -1 {
r = vertex such that (j,r) in G.E and c(j,r)+C(r) is minimum
C[j] = c(j,r)+C(r);
D[j] = r;
}
P[1] = 1; P[k] = n;
for j = 2 to k-1 {
P[j] = D[P[j-1]];
}
return P;
}
