
NAME : Akash Rathore

COURSE : MBA

SEMESTER : Two

BOOK CODE: MBA-205

BOOK NAME : Operations Research

DATE OF SUBMISSION: 27-04-2018

Q-1 What is the linear programming model?


Ans-1 Linear programming is the analysis of problems in which a linear function of a number of variables is to be maximized or minimized when those variables are subject to a number of restrictions in the form of linear inequalities. The technique has found important applications in blending problems and diet problems; oil refineries, chemical industries, steel industries and the food-processing industry also use linear programming with considerable success. In short form it is written as LP: a method of achieving the best outcome in a situation whose requirements are represented by linear relationships in a mathematical model. A problem based on LP may be defined as the problem of maximizing or minimizing a linear function subject to linear constraints. Problems involving only two variables can be solved effectively by a graphical technique, which provides a pictorial representation of the solution.

Linear programming, generally, is a method to achieve the best outcome (such as maximum profit or lowest cost) in a mathematical model whose requirements are represented by linear relationships. Linear programming is a special case of mathematical programming. More formally, linear programming is a technique for the optimization of a linear objective function, subject to linear equality and linear inequality constraints. Its feasible region is a convex polytope, which is a set defined as the intersection of finitely many half-spaces, each of which is defined by a linear inequality. Its objective function is a real-valued affine (linear) function defined on this polyhedron. A linear programming algorithm finds a point in the polyhedron where this function has the largest (or smallest) value, if such a point exists.

Linear programs are problems that can be expressed in canonical form as

Maximize c^T x

Subject to Ax ≤ b and x ≥ 0

where x represents the vector of variables (to be determined), c and b are vectors of coefficients, A is a known matrix of coefficients, and (·)^T denotes the matrix transpose. The expression to be maximized or minimized is called the objective function. The inequalities Ax ≤ b and x ≥ 0 are the constraints, which specify a convex polytope over which the objective function is to be optimized. In this context, two vectors are comparable when they have the same dimensions. If every entry in the first is less than or equal to the corresponding entry in the second, then the first vector is said to be less than or equal to the second vector.
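
As a minimal sketch of solving such a problem numerically (assuming SciPy is available; the numbers are hypothetical), the canonical form maps directly onto scipy.optimize.linprog. Since linprog minimizes, the maximization objective c^T x is passed with its sign flipped:

```python
# Minimal sketch: maximize 3*x1 + 2*x2 subject to Ax <= b, x >= 0 (hypothetical data).
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 2.0])                  # objective coefficients (to be maximized)
A = np.array([[1.0, 1.0],                 # x1 + x2   <= 4
              [1.0, 3.0]])                # x1 + 3*x2 <= 6
b = np.array([4.0, 6.0])

# linprog minimizes, so pass -c and negate the optimal value afterwards.
res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)], method="highs")
print("optimal x:", res.x, "optimal value:", -res.fun)
```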

Linear programming can be applied to various fields of study. It is widely used in mathematics,
and to a lesser extent in business, economics, and for some engineering problems. Industries
that use linear programming models include transportation, energy, telecommunications and
manufacturing. It has proven useful in modelling diverse types of problems in planning, routing,
scheduling, assignment and design.

Standard form is the usual and most intuitive form of describing a linear programming problem.
It consists of the following three parts:

A linear function to be maximized, e.g. f(x1, x2) = c1 x1 + c2 x2

Problem constraints of the following form, e.g.

a11 x1 + a12 x2 ≤ b1
a21 x1 + a22 x2 ≤ b2
a31 x1 + a32 x2 ≤ b3

Non-negative variables, e.g. x1 ≥ 0, x2 ≥ 0

The problem is usually expressed in matrix form, and then becomes:

max { c^T x | Ax ≤ b and x ≥ 0 }

Other forms, such as minimization problems, problems with constraints in alternative forms, and problems involving negative variables, can always be rewritten into an equivalent problem in standard form.
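
For example, the minimization problem min c^T x subject to Ax ≥ b, x ≥ 0 can be rewritten in the standard form above as max (−c)^T x subject to (−A)x ≤ −b, x ≥ 0, since minimizing c^T x is the same as maximizing −c^T x and each constraint a_i^T x ≥ b_i is equivalent to −a_i^T x ≤ −b_i. A variable that is allowed to be negative can be replaced by the difference of two non-negative variables, x_j = x_j⁺ − x_j⁻.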

Q-2 What is the penalty rule for artificial variables?


Ans-2 Artificial variables are nonnegative variables temporarily added to a linear program to obtain an initial basic feasible solution. The artificial variables must be driven to zero to obtain a basic feasible solution to the original constraints. One way of doing this is to solve an auxiliary linear program in which the objective is to minimize the sum of the artificial variables.

This section considers LPPs in which constraints may also have ≥ and = signs, after ensuring that all b_i ≥ 0. In such cases the basis matrix cannot be obtained as an identity matrix in the starting simplex table, so we introduce a new type of variable called the artificial variable.

If the slack variables do not provide an initial basic feasible solution, the question arises as to how to start the initial table of the simplex method and proceed. This is the case when a slack variable takes a negative value. For example, consider the constraint 2x + 3y ≥ 15.

The method of converting this inequality (greater than or equal to) into an equation is to subtract a surplus variable S, so that we have 2x + 3y − S = 15.

Now if x and y are non-basic variables in the problem, then S is taken as the starting basic variable. But its value is S = −15, which is infeasible, and we cannot proceed with further iterations of the simplex method from an infeasible solution.

So to obtain a starting solution, we adopt the artificial variable technique. Two methods are available that use artificial variables. These are:

1. Big-M Method: If some of the constraints in the linear programming problem are of the type (≥) or (=), we have to use the M technique for maximization as well as minimization of an objective function (a small numerical sketch of the penalty idea is given after this list). The various steps of the M technique are:
 Express the given linear programming problem in equation form by bringing all the terms in the objective function to the left-hand side and writing the constraints as equations using slack variables. (Add slack variables for constraints of the type ≤ and subtract surplus variables for constraints of the type ≥.) Now obtain a solution for the problem, which will be infeasible, since the basic variable is negative wherever the constraints are of the type (≥).
 To get a starting basic feasible solution, add non-negative variables to the left-hand side of each of the equations corresponding to the constraints of the type (≥) and (=). These variables are called artificial variables. Thus we change the constraints to get a basic solution, which violates the corresponding constraints; this is only for starting purposes. But if, in the final solution (if it exists), the artificial variables become non-basic (their values are zero), then we are back to the original constraints. This method of driving the artificial variables out of the basis is called the Big-M technique. The result is achieved by assigning a very large per-unit penalty to these variables in the objective function. Such a penalty will be −M for maximization problems and +M for minimization problems, the value of M being strictly positive and very large. By attaching these per-unit penalties to the artificial variables we ensure that they never become candidates for entering variables once they are driven out.
 For the starting basic solution, use the artificial variables in the basis. The starting table in the simplex procedure should not contain terms involving the basic variables (one of the conditions to be satisfied by the simplex method). But we will have terms like +MA or −MA, in a maximization or minimization problem respectively, on the left-hand side of the objective row. In other words, the objective function must be expressed in terms of non-basic variables only. This leads us to make the coefficients of the artificial variables (the starting basic variables) equal to zero in the objective row. This result is obtained by adding suitable multiples of the constraint equations involving artificial variables to the objective row.
 Proceed with the regular steps of the simplex method. If the artificial variables leave the basis in the final solution, then we are back to the original problem. But if any of the artificial variables do not leave the basis in the final solution, then this indicates that the problem does not have a feasible solution.
2. Two-Phase Method: In Phase 1, we form a new objective function by assigning zero to every original variable (including slack and surplus variables) and −1 to each of the artificial variables. Then we try to eliminate the artificial variables from the basis. The solution at the end of Phase 1 serves as a basic feasible solution for Phase 2. In Phase 2, the original objective function is introduced and the usual simplex algorithm is used to find an optimal solution.
In Phase 1, all the right-hand-side values need to be non-negative, and the problem must be expressed in standard form. Three cases may arise in Phase 1:
 Min W > 0 and at least one artificial variable appears in the column “Basic Variables” at a positive level. In such a case, no feasible solution exists for the original LPP and the procedure is stopped.
 Min W = 0 and at least one artificial variable appears in the column “Basic Variables” at zero level. In such a case, the optimum basic feasible solution to the infeasibility form may or may not be a basic feasible solution to the given (original) LPP. To obtain a basic feasible solution, we continue Phase 1 and try to drive all artificial variables out of the basis, and then proceed to Phase 2.
 Min W = 0 and no artificial variable appears in the column “Basic Variables” of the current solution. In such a case, a basic feasible solution to the original LPP has been found.
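
As a minimal numerical sketch of the Big-M idea (assuming SciPy's linprog; the problem data and the finite penalty M = 10^6 are hypothetical), the ≥ constraint is converted into an equation with a surplus and an artificial variable, and the artificial variable is penalized so the solver drives it to zero:

```python
# Big-M sketch for a hypothetical problem:
#   maximize 3x + 2y   subject to   2x + 3y >= 15,  x + y <= 10,  x, y >= 0
# The >= constraint becomes the equation 2x + 3y - s + a = 15 (surplus s,
# artificial a), and a is penalized in the objective with a large M.
import numpy as np
from scipy.optimize import linprog

M = 1e6                                    # large finite penalty (assumed value)

# variable order: [x, y, s, a]; linprog minimizes, so max 3x+2y -> min -3x-2y
c = np.array([-3.0, -2.0, 0.0, M])

A_eq = np.array([[2.0, 3.0, -1.0, 1.0]])   # 2x + 3y - s + a = 15
b_eq = np.array([15.0])
A_ub = np.array([[1.0, 1.0, 0.0, 0.0]])    # x + y <= 10
b_ub = np.array([10.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, method="highs")
x, y, s, a = res.x
print(f"x={x:.2f}  y={y:.2f}  surplus={s:.2f}  artificial={a:.2f}  max z={-res.fun:.2f}")
```

Because of the penalty, the artificial variable is driven to zero at the optimum, so the reported objective is that of the original problem.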

Q-3 Describe the economic interpretation of the dual problem.


Ans-3 The optimum solution for a dual can be read from the final simplex table of the primal problem and vice versa. At times it is easier to solve the dual and read off the solution for the primal. For instance, if we have a two-variable minimization LPP with four ≥ constraints, then to solve the problem we will have to add eight variables (four surplus and four artificial variables). However, for a maximization problem with all ≤ constraints we only have to add slack variables, equal in number to the constraints. Therefore, it can be computationally efficient to write the dual of the problem and solve it to read off the solution of the primal. The optimal value of the objective function is the same for both the primal and the dual solutions.

The values of the dual variables in the optimal solution represent the corresponding shadow prices, or the worth of the constraints in the primal. In a production problem, the shadow prices indicate which of the resources should be given priority: it is the one with the highest shadow price. Resources with zero shadow prices are under-utilized and have spare capacity.

For instance, suppose there is an entrepreneur who wants to purchase all of a furniture company's resources, i.e. 40 hours of labor, 210 (ft)^3 of lumber and 240 (ft)^2 of inventory space. He must then determine the price he is willing to pay for a unit of each of the company's resources. Let y1, y2 and y3 be the prices for one hour of labor, one cubic foot of lumber and one square foot of inventory space. We show that the prices of the resources should be determined by solving the dual problem. The total price the entrepreneur must pay for the resources is 40y1 + 216y2 + 240y3, and since he wishes to minimize the cost of purchasing the resources, he wants to min W = 40y1 + 216y2 + 240y3. But he must be willing to pay enough to induce the company to sell its resources. For example, he must offer the company at least $160 for a combination of resources that includes 2 hours of labor, 18 cubic feet of lumber and 24 square feet of inventory space, because the company could use these resources to manufacture a table that can be sold for $160. Since he is offering 2y1 + 18y2 + 24y3 for the resources used to produce a table, he must choose y1, y2, y3 to satisfy 2y1 + 18y2 + 24y3 ≥ 160.
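
A minimal sketch of this primal-dual reading (assuming SciPy's linprog; the table data and resource amounts follow the figures above, while the second product, a chair, is a hypothetical addition needed to complete the primal data):

```python
# Hypothetical furniture example: a table uses 2 h labor, 18 ft^3 lumber,
# 24 ft^2 space and sells for $160 (from the text); the chair column below
# is an assumption added only to make the primal complete.
import numpy as np
from scipy.optimize import linprog

A = np.array([[ 2.0,  1.0],     # labor hours per table, per chair
              [18.0,  8.0],     # lumber (ft^3)
              [24.0, 10.0]])    # inventory space (ft^2)
resources = np.array([40.0, 216.0, 240.0])
profit = np.array([160.0, 60.0])

# Primal: maximize profit subject to resource limits (linprog minimizes -profit).
primal = linprog(-profit, A_ub=A, b_ub=resources, method="highs")

# Dual: minimize the total price paid for the resources, subject to offering at
# least each product's selling price for the resource bundle it consumes.
dual = linprog(resources, A_ub=-A.T, b_ub=-profit, method="highs")

print("primal optimum:", -primal.fun)          # equals the dual optimum
print("dual optimum:  ", dual.fun)
print("shadow prices (y1, y2, y3):", dual.x)    # zero => resource under-utilized
```

In this hypothetical instance the two optimal values coincide, and the dual solution plays the role of the shadow prices: resources whose prices come out as zero have spare capacity.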

Q-4 What is the method of formulating an integer programming problem?


Ans-4 An integer programming problem is a mathematical optimization or feasibility problem in which some or all of the variables are restricted to be integers. In many settings the term refers to integer linear programming, in which the objective function and the constraints are linear. If some decision variables are not discrete, the problem is known as a mixed-integer programming problem.

Integer programming is the class of problems that can be expressed as the optimization of a linear function subject to a set of linear constraints over integer variables. It is in fact NP-hard. More important, perhaps, is the fact that the integer programs that can be solved to provable optimality in reasonable time are much smaller in size than their linear programming counterparts. There are exceptions, of course, and several important classes of integer programs can be solved efficiently, but discrete programs are in general much harder to solve than linear programs.

1. Cutting Stock Problem: A sawmill produces standard boards which are 10 inches wide and l inches long. It receives orders for 100 boards which are 2 inches wide, 150 boards which are 3 inches wide and 80 boards which are 4 inches wide; all these boards are required to be l inches in length. The sawmill wishes to determine how to meet these orders so as to minimize total waste, where waste is defined as any leftover portion of a standard board which cannot be used to meet demand. For example, if a standard board is split into 3-inch boards, there will be a leftover strip having a width of 1 inch; note that this cutting pattern yields three 3-inch boards from each standard board which is cut. Because different cutting patterns can create multiple boards out of standard boards, there is a possibility that a surplus or excess number will be cut. (A small formulation sketch in code is given after this list.)
2. Transportation Problem : The general transportation problem is concerned with
distributing any commodity from a group of supply centers, called sources, to any group
of receiving centers, called destinations, in such a way as to minimize the total cost of
distribution. Each source has a fixed supply of units, and this entire supply must be
distributed to the destinations.
3. Fixed Charge Problem: A linear programming problem in which each variable has a fixed-charge coefficient in addition to the usual cost coefficient. The fixed charge is a non-linear function and is incurred only when the variable appears in the solution at a positive level.
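
A minimal formulation sketch of the cutting-stock example above (assuming SciPy >= 1.9 for scipy.optimize.milp; the enumeration of cutting patterns and the simplified waste accounting, which ignores surplus pieces, are assumptions of this sketch):

```python
# Cutting stock: choose how many 10-inch standard boards to cut with each
# feasible pattern so that demand for 2", 3" and 4" boards is met at
# minimum total leftover width.
from itertools import product
import numpy as np
from scipy.optimize import milp, LinearConstraint

board_width = 10
widths = [2, 3, 4]             # ordered board widths
demand = [100, 150, 80]        # boards required of each width

# Enumerate every feasible cutting pattern (counts of 2", 3", 4" pieces
# that fit into one 10-inch standard board).
patterns, waste = [], []
for counts in product(range(6), range(4), range(3)):
    used = sum(c * w for c, w in zip(counts, widths))
    if 0 < used <= board_width:
        patterns.append(counts)
        waste.append(board_width - used)

A = np.array(patterns).T        # A[i, p] = pieces of width i produced by pattern p
c = np.array(waste, float)      # objective: total leftover inches

# x[p] = number of standard boards cut with pattern p (integer, >= 0),
# subject to meeting demand for every width.
res = milp(c=c,
           constraints=LinearConstraint(A, lb=demand, ub=np.inf),
           integrality=np.ones(len(patterns)))
print("total waste:", res.fun)
for p, count in zip(patterns, res.x.round().astype(int)):
    if count:
        print(f"pattern {p}: cut {count} standard boards")
```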

Q-5 What are the decision criteria under uncertainty?


Ans-5 Decision-making techniques are used to select the "best" alternative under multiple and often conflicting criteria. Multi-criteria decision making (MCDM) necessitates incorporating uncertainties into the decision-making process. One line of work extends the framework proposed by Yager to multiple decision makers and fuzzy utilities (payoffs) and introduces the concept of an expert credibility factor; such an approach has been demonstrated for seismic risk management using a heuristic hierarchical structure, illustrated step by step with a hypothetical example of a three-story reinforced concrete building.

When managers make choices or decisions under risk or uncertainty, they must somehow incorporate this risk into their decision-making process. Some basic rules help managers make decisions under conditions of risk and uncertainty. Conditions of risk occur when a manager must make a decision for which the outcome is not known with certainty; under conditions of risk, the manager can make a list of all possible outcomes and assign probabilities to the various outcomes. Uncertainty exists when a decision maker cannot list all possible outcomes and/or cannot assign probabilities to the various outcomes. To measure the risk associated with a decision, the manager can examine several characteristics of the probability distribution of outcomes for the decision. The various rules for making decisions under risk require information about several different characteristics of this distribution (a small computational sketch follows the list below):

1. The expected value (or mean) of the distribution

2. The variance and standard deviation

3. The coefficient of variation
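
A minimal sketch (with hypothetical payoffs and probabilities) computing these three measures for a discrete distribution of outcomes:

```python
# Risk measures for a discrete probability distribution of outcomes (assumed data).
import numpy as np

outcomes = np.array([1000.0, 2500.0, 4000.0])   # possible payoffs (hypothetical)
probs = np.array([0.3, 0.5, 0.2])               # probabilities assigned by the manager

mean = np.sum(probs * outcomes)                  # expected value
variance = np.sum(probs * (outcomes - mean) ** 2)
std_dev = np.sqrt(variance)
coeff_var = std_dev / mean                       # coefficient of variation

print(f"E[X] = {mean:.2f}, Var = {variance:.2f}, SD = {std_dev:.2f}, CV = {coeff_var:.3f}")
```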

In our day-to-day conversation, we use the two terms ‘risk’ and ‘uncertainty’ synonymously. Both imply ‘a lack of certainty’, but there is a difference between the two concepts. Risk can be characterized as a state in which the decision-maker has only imperfect knowledge and incomplete information but is still able to assign probability estimates to the possible outcomes of a decision. These estimates may be subjective judgments, or they may be derived mathematically from a probability distribution. Uncertainty is a state in which the decision-maker does not have even the information needed to make subjective probability assessments.

It was Frank Knight who first drew a distinction between risk and uncertainty. Risk is objective
but uncertainty is subjective; risk can be measured or quantified but uncertainty cannot be.
Modern decision theory is based on this distinction. In general, two approaches are used to
estimate the probabilities of decision outcomes. The first one is deductive and it goes by the
name a priori measurement; the second one is based on statistical analysis of data and is called
a posteriori.

With the a priori method, the decision-maker is able to derive probability estimates without carrying out any real-world experiment or analysis. For example, we know that if we toss an unbiased coin, one of two equally likely outcomes (i.e., either head or tail) occurs, and the probability of each outcome is predetermined.
The a posteriori measurement of probability is based on the assumption that the past is a true representative of (guide to) the future. For example, insurance companies often examine historical data in order to determine the probability that a typical twenty-five-year-old male will die, have an automobile accident, or incur a fire loss.

Thus the implication is that even though they cannot predict the probability that a particular
individual will have an accident, they can predict how many individuals in a particular age group
are likely to have an accident and then fix their premium levels accordingly.

By contrast, uncertainty implies that the probabilities of various outcomes are unknown and
cannot be estimated. It is largely because of these two characteristics that the decision-making
in an uncertain environment involves more subjective judgment. Uncertainty does not mean that the decision-maker has no knowledge at all; rather, it implies that there is no logical or consistent approach to the assignment of probabilities to the possible outcomes.

1. Maximin: The maximin (or Wald) criterion is often called the criterion of pessimism. It is
based on the belief that nature is unkind and that the decision-maker therefore should
determine the worst possible outcome for each of the actions and select the one
yielding the best of the worst (maximin) results. That is, the decision-maker should
choose the best of the worst.
2. Maximax : An exactly opposite criterion is the maximax criterion. It is known as the
criterion of optimism because it is based on the assumption that nature is benevolent
(kind). Thus, this criterion is suitable to those who are particularly venturesome
(extreme risk takers). In direct contrast to the maximin criterion the maximax implies
selection of the alternative that is the “best of the best”. This is equivalent to assuming
with extreme optimism that the best possible outcome will always occur.
3. Hurwicz Alpha Index : The Hurwicz alpha criterion seeks to achieve a pragmatic
compromise between the two extreme criteria presented above. The focus is on an
index which is based on the derivation of a coefficient known as the coefficient of
optimism. Here the decision-maker considers both the maximum and the minimum
payoffs from each action and weighs these extreme outcomes in accordance with
subjective evaluations of either optimism or pessimism.

If, for instance, we assume that the decision-maker has a coefficient of 0.25 for a particular set
of actions, the implication is clear. He has implicitly assigned a probability of occurrence of 0.25
to the maximum payoff and of 0.75 to the minimum payoff.
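
A minimal sketch (with a hypothetical payoff matrix) applying the three criteria above, using alpha = 0.25 as in the illustration:

```python
# Decision criteria under uncertainty for an assumed payoff matrix
# (rows = actions, columns = states of nature).
import numpy as np

payoffs = np.array([
    [ 40,  70, 100],   # action A
    [-10,  60, 150],   # action B
    [ 20,  50,  90],   # action C
])
actions = ["A", "B", "C"]
alpha = 0.25            # coefficient of optimism (weight on the best outcome)

row_min = payoffs.min(axis=1)
row_max = payoffs.max(axis=1)

maximin = actions[np.argmax(row_min)]                     # best of the worst
maximax = actions[np.argmax(row_max)]                     # best of the best
hurwicz_index = alpha * row_max + (1 - alpha) * row_min   # weighted compromise
hurwicz = actions[np.argmax(hurwicz_index)]

print("maximin choice:", maximin)
print("maximax choice:", maximax)
print("Hurwicz (alpha = 0.25) choice:", hurwicz)
```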
