
International Journal of Computational Intelligence and Information Security, February 2012 Vol. 3, No. 2

A Survey of Linearization Techniques for Nonlinear Models

Negar Jaberi¹, Reza Rafeh²

¹,²Department of Computer Engineering, Malayer Branch, Islamic Azad University, Malayer, Iran
¹negar_jaberi2008@yahoo.com, ²reza_rafeh@yahoo.com


Abstract
Constraint decision problems appear in many real life applications. Tackling such problems consists of two steps:
modelling and solving. In the modelling step, the problem must be formulated precisely. There are three well-
known solving techniques: mathematical methods, constraint programming techniques and local search.
Mathematical methods are divided into two broad categories: linear and nonlinear techniques. Although finding a
nonlinear model for the problem is more convenient than finding a linear model, there are more efficient
algorithms for solving linear models. Therefore, using linearization techniques to map a nonlinear model to an equivalent linear model combines the benefits of high-level modelling and efficient solving. In this paper we review the most popular linearization methods.

Keywords: Constraint Decision Problems, Linear Equations, Nonlinear Equations, Linearization Methods,
Solvers.



1. Introduction

Constraint decision problems appear in many applications such as scheduling, planning, and routing [1]. Such problems are very important, and many companies around the world are very interested in solving them because doing so decreases costs and increases profits [2].

However, constraint decision problems are, in general, NP-hard. In other words, there is no general and efficient algorithm for solving them. The main problem is that their search space (i.e. the number of possible choices) grows exponentially with the number of variables. When the size of the problem increases, finding a solution may sometimes take years, even using a powerful computer [1].

Combinatorial and optimization problems can be divided into three categories:

1. Constraint Satisfaction Problems (CSP): Basically, a CSP is a problem composed of a finite set of
variables, each of which has a finite domain, and a set of constraints that limit the values that the
variables can simultaneously take. The task is to assign a value to each variable while satisfying all the
constraints [3].
2. Constraint Satisfaction Optimization Problems (CSOP): In these problems, we do not want to find just any solution, but a good one. The quality of a solution is usually measured by a function called the objective function. The goal is to find a solution that satisfies all the constraints and minimizes or maximizes the objective function.
3. Over-Constrained Problems: For some problems, it is impossible to find a valuation satisfying all the constraints. Such problems are called over-constrained problems [4]. One often wants to find solutions satisfying as many constraints as possible [3]. In this case the constraints are divided into two categories:
Hard constraints: These constraints must be satisfied.
Soft constraints: These constraints may be violated.
In over-constrained problems, the final solution must satisfy all hard constraints and the maximum possible number of soft constraints [5].

Solving a constraint decision problem requires the problem to be precisely and efficiently formulated. Recent approaches to solving such problems are divided into two steps. The first step is to specify a conceptual model that provides a declarative specification, regardless of how the problem is actually solved. This step is called modelling, and the formulation is called the (conceptual) model. Modelling is often difficult and requires considerable experience. The second step is to solve the problem, in which the conceptual model is mapped to an executable model, namely the design model. The most popular techniques for solving constraint decision problems fall into three main areas: constraint programming, mathematical methods, and local search [1].

As mentioned previously, a constraint decision problem is composed of a finite set of variables, a set of constraints, and an objective function in case it is an optimization problem. Based on the type of variables, constraints and objective function, models are divided into two groups: linear and nonlinear.

In the rest of the paper, we introduce linear and nonlinear models and their solving methods. Then, we review the most well-known linearization techniques.
2. Linear Models
2.1 Definition of a Linear Model
A linear programming problem (LP) is a class of mathematical programming problems. It consists of a
linear objective function z and a set of linear constraints (a system of simultaneous linear equations and/or
inequalities). We seek a set of values for continuous variables (x_1, x_2, …, x_n) that maximizes or minimizes the objective function and satisfies the constraints. A standard form of an LP problem is expressed as follows:

Maximize z = Σ_j c_j x_j
subject to Σ_j a_ij x_j ≤ b_i   (i = 1, 2, …, m)
x_j ≥ 0   (j = 1, 2, …, n)

where b_i (i = 1, 2, …, m) can be positive or negative [6].

2.2 Solving Methods for Linear Models
Mathematical methods have, in recent decades, been the standard techniques for solving optimization problems in Operations Research (OR). Techniques in this group are Linear Programming (LP), Integer Programming (IP), in which all variables are restricted to be integer, and Mixed Integer Programming (MIP), in which variables may be integer or real [1].
2.2.1 Solving Methods for Linear Programming (LP)
The solving methods for integer programming, which are in general based on the simplex method², require optimization and re-optimization of the relaxed version of the problem. There are three simplex-based methods for solving linear programs [6]:
1. The simplex method for upper bounded variables: For variables with an upper bound u_j and a lower bound l_j (x_j ≤ u_j or l_j ≤ x_j), the bounds of the variable are handled implicitly to reduce the problem size. The problem is then solved using the new variables x_j' = u_j − x_j or x_j' = x_j − l_j.
2. The dual simplex method: This method is used in re-optimization of LP relaxations, when we want to obtain a new solution by adding extra constraints to the current solution, without re-solving the LP problem from scratch. To do this, the new constraints are added to the current simplex tableau. Then, based on the right-hand-side values, the entering and leaving variables are determined and the necessary changes are made until the new solution is reached. This method is also applied in IP algorithms such as branch-and-bound and branch-and-cut.
3. The revised simplex method: In this method, the constraints and the objective function are multiplied by the inverse basis matrix³, B⁻¹, which gives us the ordinary simplex tableau. This tableau can be simplified to the revised simplex tableau, to which the revised simplex algorithm is then applied.
The revised simplex algorithm is a version of the simplex method, but it updates a revised tableau with m (the number of constraints) columns rather than the full simplex tableau with m+n (the numbers of constraints and variables) columns. This method is very useful in the branch-and-bound algorithm.
2.2.2 Solving Methods for Integer Programming (IP) and Mixed Integer Programming (MIP)
Solving linear programming problems is easier than solving integer programming problems. Nevertheless, the simplex algorithm is the basis for solving IP, MIP and binary IP programs. If the IP has a special structure, the solution of the LP relaxation is sometimes also the solution of the IP. Otherwise, the following solution strategies are recommended [6]:

1. Preprocessing: These techniques produce a better formulation that is easier to solve. They are used for tightening bounds, fixing variables, and identifying redundant constraints and infeasibility in integer programming.

The main idea of preprocessing is to reformulate the problem so that the difference between the optimal objective values of the LP relaxation and the IP is minimized. In fact, preprocessing introduces tighter constraints that can reduce the problem size by a factor of 5 and the runtime by a factor of 10; hence it is advisable to apply it before solving a MIP model.

2. Branch-and-Bound (B&B): Branch-and-bound can be viewed as a divide-and-conquer approach to solving IP programs, where the branching process divides and the bounding process conquers. The solving process can be shown via a B&B tree. The root node is the solution of the LP relaxation of the original IP problem. If the LP solution satisfies the integer requirements, the IP problem is solved. Otherwise, its value becomes an upper bound on the IP solution and the root node is divided into two nodes by branching. The branches are defined by constraints added to the problem and have two properties: a) they cut off the fractional part of the LP solution; b) their union preserves the original integer feasible region. For example, if the solution is y_1 = 5.6, y_1 is selected as the variable to branch on. Two constraints, y_1 ≤ 5 and y_1 ≥ 6, are added to the LP relaxation, and LP re-optimization is performed for those two nodes.

Branches that cannot lead to a better solution than the current best, or that have no feasible solution, are systematically pruned. The algorithm continues until an integer solution is found; this solution is considered a candidate optimum. After exploring the tree and evaluating all branches, the candidate solution with the maximum objective value (for a maximization problem) is optimal.
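The branching and bounding steps above can be sketched in a few lines of Python. The example below is an illustrative toy, not taken from the paper: it solves a small 0-1 knapsack problem, using a greedy fractional (LP-relaxation-style) bound for the bounding step and pruning branches whose bound cannot beat the incumbent. All data and function names are assumptions made purely for illustration.

```python
from fractions import Fraction

def lp_bound(values, weights, capacity, fixed):
    """Upper bound: fractional relaxation of a 0-1 knapsack in which
    some variables are already fixed by earlier branching decisions."""
    total_v, cap = Fraction(0), Fraction(capacity)
    free = []
    for i, (v, w) in enumerate(zip(values, weights)):
        if fixed.get(i) == 1:
            total_v += v
            cap -= w
        elif i not in fixed:
            free.append((Fraction(v, w), v, w))
    if cap < 0:
        return None                      # infeasible branch
    free.sort(reverse=True)              # best value/weight ratio first
    for _, v, w in free:
        take = min(Fraction(1), cap / w) # fractional part allowed here
        total_v += take * v
        cap -= take * w
    return total_v

def branch_and_bound(values, weights, capacity):
    best = Fraction(0)
    stack = [{}]                         # each node fixes some 0-1 variables
    while stack:
        fixed = stack.pop()
        bound = lp_bound(values, weights, capacity, fixed)
        if bound is None or bound <= best:
            continue                     # prune: infeasible or dominated
        i = next((j for j in range(len(values)) if j not in fixed), None)
        if i is None:                    # all variables integral: a candidate
            best = max(best, bound)
            continue
        stack.append({**fixed, i: 0})    # branch: exclude item i
        stack.append({**fixed, i: 1})    # branch: include item i
    return int(best)
```

Each stack entry plays the role of a B&B tree node: the dictionary records the branching decisions made on the path from the root.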

²Note: the simplex method supports linear constraints with only the relational operators =, ≤ and ≥, and does not accept >, < or ≠. Arithmetic operators such as ^, div and mod, logical operators such as or and not, and set operators are also not allowed on variables.
³The m×m coefficient matrix of the basic variables is called a basis matrix.


The branch-and-cut (B&C) approach generalizes the branch-and-bound method. It is built upon the branch-and-bound framework and attempts to obtain tighter bounds with additional cuts generated before pruning and branching. It is based on the concept of cuts: additional constraints that cut off part of the relaxed LP solution space while leaving the MIP solution space unchanged. Three techniques are used to generate cuts from constraints: rounding, disjunction, and lifting.

Branch-and-price (B&P) is also built upon the branch-and-bound framework. B&C tightens the LP relaxation by adding constraints (rows), whereas B&P tightens the LP relaxation by generating a subset of profitable columns of variables. The column generation approach is applied across the branch-and-bound tree, before branching. Branching occurs when useful columns cannot be found and the LP solution does not satisfy the integrality conditions.

The column generation approach is used to solve problems that have too many columns. Instead of considering all columns, only a subset of columns (associated with the basic variables) is maintained and updated. Because most columns have value 0 in the optimal solution, only profitable columns (associated with the non-basic variables) are generated and added to the problem to improve the LP solution.

3. Heuristics to develop tighter lower bounds: Prior to or during the use of a solution algorithm for the MIP, if a tighter lower bound on the optimal value is known, the search can be limited to the more promising parts of the relaxed LP feasible region. Heuristics find good solutions to the MIP, that is, solutions within a few percent of optimal. Such solutions can be accepted as good enough, or can become the starting point for an algorithm (e.g., branch-and-bound), reducing the number of iterations the algorithm needs to converge. There are three heuristic approaches: local search, tabu search, and genetic algorithms [6].

4. Relaxations to develop tighter upper bounds: A relaxation of a problem is a formulation that includes the set of feasible solutions to the problem. Relaxation is used in traditional branch-and-bound search in IP, where branching occurs when fractional values violate the integrality requirements. To be useful, a relaxation should satisfy two conditions: 1) it should be faster to solve to optimality than the original problem, and 2) its solution structure should be as similar to that of the original problem as possible, to provide strong bounds. The most common relaxation is to formulate the original problem in a linear form, drop the integrality requirements, and solve it using linear programming methods [2]. When a tighter upper bound is used on a maximizing objective function for MIP problems, an improved solution is achieved; relaxation approaches serve to obtain an upper bound on the value of a maximizing MIP.

Furthermore, preprocessing, heuristics, and relaxations can be used at each node in branch-and-cut. This illustrates that general-purpose algorithms are used to control the overall MIP solution process, while special-purpose approaches are used to improve their overall effectiveness [6].

Mathematical methods are applied to continuous variables and are very efficient, but they can handle only a limited form of constraints. As a result, they are not always appropriate for expressing constraint decision problems [1].
3. Nonlinear Models
3.1 Definition of a Nonlinear Model
Nonlinear programs are written generally as follows:

Min f(x)
subject to c_i(x) = 0, i ∈ E
c_i(x) ≥ 0, i ∈ I
l_i ≤ x_i ≤ u_i, i ∈ 1..n

where f and c_i, i ∈ E ∪ I (the objective and constraint functions, respectively) are functions mapping Rⁿ to R, x ∈ Rⁿ represents the problem variables, and l, u ∈ Rⁿ represent lower and upper bounds on x. E is the set of equality constraints and I is the set of inequality constraints [7].


3.2 Solving Methods for the Nonlinear Models
In this section, we review the most popular solving methods for nonlinear programs. The first three methods are also used for linear models, while the remaining two are used only for nonlinear models.
3.2.1 Constraint Programming
Constraint programming uses two steps for solving constraint decision problems: search (for exploring the search space) and constraint propagation (for pruning the search space). Constraint programming supports different types of constraints and hence can easily express a wide range of constraint decision problems. However, it cannot efficiently handle optimization problems, especially when the problem has continuous variables.
3.2.2 Local Search
Local search works by instantiating points of the search space, so it can find good solutions in a short time by using a good heuristic to guide the search. These techniques are typically based on moves and are incomplete; therefore, they do not guarantee finding a solution. They use an evaluation function (heuristic) that considers the status of the value assigned to each variable to guide the search towards a solution. For example, the evaluation function may be the number of violated constraints (namely, conflicts) [1].
3.2.3 Interior Point Methods
In interior point methods, an optimal solution is approached from the interior of the feasible region. The first step is to replace the inequality constraints and variable bounds by log barrier terms in the objective function, so the above NLP problem can be rewritten as:

min f(x) − μ Σ_{i=1..n} ln(x_i − l_i) − μ Σ_{i=1..n} ln(u_i − x_i)
subject to c_i(x) = 0   (i = 1, 2, …, m)

where μ > 0 is the barrier parameter.

The second step is to determine the first-order optimality conditions. This is done using the Lagrangian function:

L(x, λ, z; μ) = f(x) − Σ_{i=1..m} λ_i c_i(x) − μ Σ_{i=1..n} ln(x_i − l_i) − μ Σ_{i=1..n} ln(u_i − x_i)

The first order conditions for nonlinear programming can be solved using Newton's method, since they form
a large system of nonlinear equations.

3.2.4 Penalty Methods
Such techniques convert a constrained optimization problem into an unconstrained one. This is done by
adding any violation of the constraints to the objective function and removing the constraints. There are several
methods of penalizing the constraints. Here, we consider three methods: Quadratic Penalty, Log Barrier Penalty,
Exact Penalty.

The Quadratic Penalty Function is the simplest penalty method. The penalty terms are the squares of the constraint violations, so the above constrained nonlinear problem is rewritten as the following unconstrained problem:

min Q(x; μ) = f(x) + (1/(2μ)) Σ_{i∈E} c_i²(x) + (1/(2μ)) Σ_{i∈I} ([c_i(x)]⁻)²

where μ > 0 is the penalty parameter and [c_i(x)]⁻ = max(−c_i(x), 0). As μ tends to zero, the constraint violation is penalized more severely. It can be proved that if the exact minimizer of Q(x; μ_k) is found at each iteration, or if the tolerance used to find the approximate minimizer tends to zero as k tends towards infinity, then any limit point of the sequence {x_k} is a solution of the NLP problem.
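The behaviour of the quadratic penalty as μ shrinks can be seen on a one-variable toy problem (min x² subject to x − 1 = 0, chosen purely for illustration): the unconstrained minimizer of Q(x; μ) = x² + (1/(2μ))(x − 1)² is x = 1/(2μ + 1), which approaches the constrained optimum x = 1 as μ tends to zero. A minimal gradient-descent sketch, with illustrative helper names:

```python
def quadratic_penalty_min(mu, iters=200):
    """Minimize Q(x; mu) = x**2 + (1/(2*mu)) * (x - 1)**2 by gradient
    descent; this is the quadratic penalty for: min x**2 s.t. x - 1 = 0."""
    x = 0.0
    step = mu / (2 * mu + 1)          # stable step size for this quadratic
    for _ in range(iters):
        grad = 2 * x + (x - 1) / mu   # dQ/dx
        x -= step * grad
    return x

# Decreasing mu penalizes the violation of x - 1 = 0 more severely,
# driving the unconstrained minimizer toward the constrained optimum x = 1.
xs = [quadratic_penalty_min(mu) for mu in (1.0, 0.1, 0.01)]
```

For μ = 1.0, 0.1 and 0.01 the minimizers are approximately 0.333, 0.833 and 0.980, monotonically approaching 1.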

The Logarithmic Barrier Penalty Function is the best choice for problems which have only inequality constraints. In this method, the penalty terms are based on the natural logarithms of the constraints. So, the NLP problem represented as an unconstrained minimization problem using the log barrier approach is:

min P(x; μ) = f(x) − μ Σ_{i∈I} ln c_i(x)

where μ > 0 is the penalty parameter.


Under specified conditions, it can be proved that any sequence of approximate minimizers of P(x; μ) converges to a minimizer of the NLP problem with inequality constraints as μ tends to zero.

There also exists a class of penalty functions, known as Exact Penalty Functions, for which only a single minimization is required. One of these is the l₁ penalty function, and the problem using it is:

min φ₁(x; μ) = f(x) + (1/μ) Σ_{i∈E} |c_i(x)| + (1/μ) Σ_{i∈I} [c_i(x)]⁻

Another is the Augmented Lagrangian function, which is an extension of the quadratic penalty function.
Note: Newton's method is used in the solution algorithms of penalty methods to find an approximate minimizer of the penalty function.

3.2.5 Filter Methods
In these methods, a filter is chosen and a new point is accepted if it passes through the filter. The filter comprises points which can be represented in two dimensions, with one axis representing the value of the objective function and the other the violation of the constraints. An acceptable point is one which lowers the value of the objective function or reduces the violation of the constraints. When a point is accepted, it is added to the filter, and any points in the filter which are dominated by the new point are removed. A point is dominated by another point if it has both a higher objective value and a greater constraint violation than that point.

If a problem is feasible and has a minimum, then the filter method converges to an optimal point, at which no constraint is violated and the objective function cannot be improved further.

This method can be combined with other methods in nonlinear solvers. For example, the nonlinear solver IPOPT uses a filter method incorporated into an interior point NLP solver [7].
4. Linearization Techniques
Although modelling with nonlinear equations is more convenient, (1) nonlinear optimization problems have some traps: function range violations (some value goes to ±∞), multiple local optima, and stationary points; (2) the solution obtained by nonlinear solvers is not necessarily optimal, and very often, in order to get better solutions, we have to run the model again with a fresh starting point, which can be specified in the var declaration; the solver will use this value as a starting guess [8].

Linear equations are more limited than nonlinear ones, because they must be stated in special forms, and finding a linear model is difficult and sometimes impossible. On the other hand, there are linear solvers that can even deliver optimal solutions for some problems. Thus, to combine the benefits of linear solvers with the unrestricted modelling ability of nonlinear models, nonlinear equations should be transformed into linear ones. In this section, we introduce some linearization methods.

Using 0-1 (binary) variables to formulate optimization problems expands the applicability of MIP and improves modelling precision. 0-1 variables can also be used to transform a variety of optimization models into integer programs. Therefore, because of these abilities, we use them for modelling the following problems:

Logical (Boolean) expressions
Non-binary variables (discrete, integer)
Piecewise linear functions
Functions with products of 0-1 variables
Non-simultaneous constraints (either/or, if/then, p out of m, negation)

We will consider these problems, one by one, in each of the following sections.
4.1 Transform Logical (Boolean) Expressions
In some applications, using logical expressions to describe problem requirements may be easier and even more natural than using mathematical expressions. That is, during the modelling process, our model may be in the form of logical expressions rather than mathematical expressions that conform to the MIP assumptions.


A logical expression does not conform to the MIP format, because a variable in an MIP problem has to be either a continuous variable that can take any real value, or an integer variable that is restricted to integral or binary values. But the true/false output of a logical expression can be considered equivalent to binary values. The purpose of this section is to use 0-1 variables to transform logical relations into linear equations/inequalities that conform to the MIP assumptions.

In the context of MIP, a statement may represent a binary variable, a linear constraint, or even a set of linear constraints. Here, we focus on operations on Boolean variables, leaving logical operations on linear constraints to the section on non-simultaneous constraints. The basic logical relations/operations on statements are: conjunction, disjunction, simple implication, double implication, and negation.

A logical statement can take on only two values: true and false. True can be represented by the value 1 and false by the value 0. Now, given two binary variables y_A and y_B for the two logical statements A and B, and a binary variable y_C for the outcome of the logical operation on them, the linear expressions for the Boolean relations are as follows:

Conjunction (A and B, A ∧ B):
y_C ≤ y_A
y_C ≤ y_B
y_C ≥ y_A + y_B − 1
(so that y_C = y_A ∧ y_B)

Disjunction (A or B, A ∨ B):
y_C ≥ y_A
y_C ≥ y_B
y_C ≤ y_A + y_B
(so that y_C = y_A ∨ y_B)

Simple Implication (If A then B, A → B):
y_A ≤ y_B

Double Implication (A if and only if B, A = B):
y_A = y_B

Negation (not A, ¬A):
y_C = 1 − y_A
(so that y_C = ¬y_A)

To obtain a correct MIP model, (1) only linear equations/inequalities are allowed, (2) if more than one linear constraint is required, these must be satisfied simultaneously, and (3) the final truth value must be 1.

If two Boolean operations are performed on three binary variables, for instance y_A ∧ y_B ∨ y_C, then two steps are required: (1) perform y_A ∧ y_B and keep its result in a new variable (say, y_D), and (2) perform y_D ∨ y_C and evaluate the truth value.
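The correctness of these linearizations can be checked exhaustively, since each variable takes only the values 0 and 1. The short Python sketch below (the function names are ours, not from the paper) enumerates all assignments and confirms that the only feasible y_C is the Boolean result:

```python
from itertools import product

def check_conjunction(yA, yB, yC):
    """The linear constraints claimed to enforce yC = yA AND yB."""
    return yC <= yA and yC <= yB and yC >= yA + yB - 1

def check_disjunction(yA, yB, yC):
    """The linear constraints claimed to enforce yC = yA OR yB."""
    return yC >= yA and yC >= yB and yC <= yA + yB

# For every 0-1 assignment of yA, yB, the unique feasible yC is the
# Boolean result of the operation.
for yA, yB in product((0, 1), repeat=2):
    feasible_and = [yC for yC in (0, 1) if check_conjunction(yA, yB, yC)]
    assert feasible_and == [yA & yB]
    feasible_or = [yC for yC in (0, 1) if check_disjunction(yA, yB, yC)]
    assert feasible_or == [yA | yB]
```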
4.2 Transform Non-binary Variables to 0-1 Variables
There are two types of non-binary variables: a general integer variable, which can take consecutive integer values between 0 and infinity, and a discrete variable, which takes nonconsecutive integer values, for example z ∈ Z = {2, 5, 9, 21}. In other words, the domains of these variables are consecutive and nonconsecutive integer values, respectively. Considering the variable domains, we can transform them into binary variables.
4.2.1 Transform Integer Variables
Some algorithms are applied only to problems with pure 0-1 variables. Consequently, any general integer variable x ≥ 0 with a finite upper bound can be converted to a set of 0-1 variables. Assume the upper bound of x is u. Then k+1 binary variables are required, where 2^k ≤ u < 2^(k+1). Taking log₂, we have:

k ≤ log₂ u < k+1


where k and k+1, respectively, are the integers obtained by rounding down and rounding up the value of log₂ u. Therefore, we define x by the formula below:

x = 2⁰y_0 + 2¹y_1 + 2²y_2 + … + 2^k y_k
Substituting this binary representation for each integer variable in the given IP problem will convert the problem to a binary integer program, but will increase the number of variables in the model. This increase may become larger if the upper bound u of x is large, but since the number of binary variables grows only logarithmically with u, the increase is not as fast as one might think.

Note that the choice of using the 0-1 transformation depends on the problem. The transformation is useful when there are a small number of general integer variables with low upper bounds and the proposed 0-1 algorithm is more efficient than the existing general integer algorithm.
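A small Python sketch of this expansion (the helper names are illustrative assumptions): for u = 21 we have 2⁴ ≤ 21 < 2⁵, so k = 4 and five binary variables with weights 2⁰…2⁴ suffice.

```python
def binary_expansion(u):
    """Weights 2^0 .. 2^k for representing an integer x with 0 <= x <= u,
    where k = floor(log2(u)), i.e. 2^k <= u < 2^(k+1)."""
    k = u.bit_length() - 1  # exact floor(log2(u)) for positive integers
    return [2 ** j for j in range(k + 1)]

def encode(x, weights):
    """The 0-1 values y_j such that x = sum(2^j * y_j)."""
    return [(x >> j) & 1 for j in range(len(weights))]

weights = binary_expansion(21)  # [1, 2, 4, 8, 16]: k+1 = 5 binary variables
y = encode(21, weights)
assert sum(w * b for w, b in zip(weights, y)) == 21  # x is recovered exactly
```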
4.2.2 Transform Discrete Variables
A discrete variable does not conform to the MIP format, because it is limited to taking only one value from a given list. Hence, we can express it by a set of binary variables and additional linear constraints. For example, suppose the discrete variable z may take on only one value in the set Z = {4, 11, 7, 19, 23}. We introduce new 0-1 variables y_i (i = 1, …, 5); if y_i = 1, then the ith element in the set is selected. Afterward, we add the following set of constraints to the problem:

z = 4y_1 + 11y_2 + 7y_3 + 19y_4 + 23y_5
y_1 + y_2 + y_3 + y_4 + y_5 = 1
y_i = 0 or 1 for all i

The constraint Σ_i y_i = 1 is called a multiple choice constraint.
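This substitution is easy to verify by brute force in Python (the enumeration helper below is ours, for illustration only): over all 0-1 vectors satisfying the multiple choice constraint, z takes exactly the values in Z.

```python
from itertools import product

Z = [4, 11, 7, 19, 23]

def feasible_z_values():
    """Enumerate all 0-1 vectors y; the multiple choice constraint
    sum(y) == 1 lets z = sum(a_i * y_i) take exactly the values in Z."""
    values = set()
    for y in product((0, 1), repeat=len(Z)):
        if sum(y) == 1:                                   # multiple choice
            values.add(sum(a * b for a, b in zip(Z, y)))  # z = sum a_i y_i
    return values

assert feasible_z_values() == set(Z)
```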
4.3 Transform Piecewise Linear Functions
A piecewise linear function is formed from a collection of individual linear segments joined at breakpoints. Such a function is nonlinear, even though all of its segments are linear. Therefore, the function must be converted to a linear form so that the resulting model can be solved by an MIP algorithm.

Consider two consecutive breakpoints a_k and a_(k+1), and the line segment between them. If x is any point on this segment, then x = λ_k a_k + λ_(k+1) a_(k+1) with λ_k + λ_(k+1) = 1 and 0 ≤ λ_k ≤ 1, and f(x) lies on the line segment between f(a_k) and f(a_(k+1)). Generalizing to all r+1 breakpoints, we can write:

λ_1 + λ_2 + … + λ_(r+1) = 1
x = λ_1 a_1 + λ_2 a_2 + … + λ_(r+1) a_(r+1)
f(x) = λ_1 f(a_1) + λ_2 f(a_2) + … + λ_(r+1) f(a_(r+1))

where λ_k ≥ 0 for all k, and at most two adjacent λ_k can be positive. The condition "at most two adjacent λ_k can be positive" is not a mathematical expression and must be converted into one. To do this, we introduce a binary variable y_k for each line segment and add the following set of linear constraints:

λ_1 ≤ y_1
λ_2 ≤ y_1 + y_2
…
λ_r ≤ y_(r−1) + y_r
λ_(r+1) ≤ y_r
Σ_k y_k = 1
y_k = 0 or 1, λ_k ≥ 0 for all k

Note that each y_k controls the values of λ_k and λ_(k+1). Thus, if y_k = 0 for every segment adjacent to λ_k, then λ_k is forced to be 0; if y_k = 1, then λ_k and λ_(k+1) may take values in the range [0, 1]. The constraint Σ_k y_k = 1 restricts the y_k so that exactly one of them has the value 1. Consequently, at most two adjacent λ_k are allowed to be nonzero in any solution.

To utilize this technique, we replace x and the function f(x) in the original objective and constraints with the continuous variables λ_k and the binary variables y_k defined by the equations above. Finally, we add the remaining constraints to the problem. Accordingly, the new problem contains only λ_k and y_k variables. After solving this equivalent problem and finding the values of the λ_k and y_k variables, the solution x of the original problem can be recovered from the equation x = Σ_k λ_k a_k.
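The λ-formulation can be exercised directly in Python. In the sketch below (the breakpoint data and helper names are illustrative assumptions), choosing the segment containing x corresponds to setting that segment's y_k = 1; the two adjacent λ values are then fixed so that x = λ_k a_k + λ_(k+1) a_(k+1), and f(x) is read off from the same λ values:

```python
def piecewise_value(x, breakpoints, values):
    """Evaluate a piecewise linear function via the lambda formulation.
    x must lie within [breakpoints[0], breakpoints[-1]]."""
    lambdas = [0.0] * len(breakpoints)
    for k in range(len(breakpoints) - 1):
        a, b = breakpoints[k], breakpoints[k + 1]
        if a <= x <= b:                        # this segment's y_k = 1
            lambdas[k + 1] = (x - a) / (b - a)
            lambdas[k] = 1.0 - lambdas[k + 1]  # at most two adjacent nonzero
            break
    assert abs(sum(lambdas) - 1.0) < 1e-12     # the constraint sum(lambda) = 1
    return sum(l * v for l, v in zip(lambdas, values))

# A hypothetical cost function: steep up to x = 4, flatter afterwards.
a = [0, 4, 10]   # breakpoints a_1 .. a_3
f = [0, 8, 11]   # f(a_1) .. f(a_3)
```

For example, piecewise_value(2, a, f) interpolates on the first segment and piecewise_value(7, a, f) on the second.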



4.4 Transform 0-1 Polynomial Functions
Consider a simple quadratic function with binary variables:

f(y_1, y_2, …, y_n) = Σ_j c_j y_j² + Σ_j Σ_{k>j} d_jk y_j y_k

Obviously, each y_j² can be replaced by y_j without affecting the value of the function. Also, a new variable y_jk is required to replace each product y_j y_k. Two linear constraints are added, specifying lower and upper bounds on y_j + y_k in terms of y_jk, for all j, k:

2y_jk ≤ y_j + y_k ≤ y_jk + 1
y_j, y_k, y_jk = 0 or 1


Therefore, higher-degree functions can be handled in a similar manner. In general, given a set Q composed of the indices of q 0-1 variables, transforming the product ∏_{j∈Q} y_j^p, for any positive integer value of p, into linear expressions involves the following steps: (1) dropping all positive exponents (y_j^p = y_j), (2) replacing the product term with a single variable y_Q, and (3) imposing the additional constraints below:

Σ_{j∈Q} y_j ≤ y_Q + (q − 1)   (1)
Σ_{j∈Q} y_j ≥ q·y_Q   (2)
y_j, y_Q = 0 or 1 for j ∈ Q

Note that if any y_j = 0, then constraint (1) is relaxed and constraint (2) forces y_Q ≤ (q−1)/q < 1, and therefore y_Q = 0. If all y_j = 1, then constraint (1) becomes y_Q ≥ 1 and constraint (2) becomes y_Q ≤ 1, so equality holds, y_Q = 1, and the desired relation is obtained.
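Constraints (1) and (2) can be verified exhaustively in Python (the enumeration helper is ours, for illustration): for every 0-1 assignment of the y_j, the only feasible y_Q equals the product of the y_j.

```python
from itertools import product

def check_product_constraints(ys, yQ):
    """Constraints (1) and (2): sum(y_j) <= yQ + (q-1) and sum(y_j) >= q*yQ."""
    q, s = len(ys), sum(ys)
    return s <= yQ + (q - 1) and s >= q * yQ

# For every 0-1 assignment, the unique feasible yQ is the product y_1*...*y_q.
q = 3
for ys in product((0, 1), repeat=q):
    prod = 1
    for y in ys:
        prod *= y
    feasible = [yQ for yQ in (0, 1) if check_product_constraints(ys, yQ)]
    assert feasible == [prod]
```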
4.5 Transform Non-simultaneous Constraints
In MIP problems, all constraints must be satisfied simultaneously, so non-simultaneous constraints violate this assumption. In this section, we present various types of non-simultaneous constraints and show how to convert them to simultaneous constraints.
4.5.1 Either/Or Constraints
A decision variable may be defined over two disjunctive regions. For example, suppose variable x is defined outside the interval (3, 10); that is, either x ≤ 3 or x ≥ 10. To transform this disjunction into simultaneous constraints, rewrite the pair as x − 3 ≤ 0 and −x + 10 ≤ 0, let M be a very big number such that M ≥ max{x − 3, −x + 10}, and let y be a binary variable. Then replace the disjunction with the two simultaneous constraints below:

x − 3 ≤ My and −x + 10 ≤ M(1 − y)

Note that if y = 1, the constraint x − 3 ≤ M is redundant (always satisfied) and the constraint −x + 10 ≤ 0 must hold. If y = 0, the constraint x − 3 ≤ 0 must hold, and the constraint −x + 10 ≤ M is redundant.
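A quick Python check of this big-M pair (M = 1000 and the helper names are illustrative assumptions) confirms that the transformed constraints admit some binary y exactly when x lies in one of the two regions:

```python
def either_or_feasible(x, y, M=1000):
    """Big-M pair replacing 'either x <= 3 or x >= 10':
    x - 3 <= M*y and -x + 10 <= M*(1 - y), with y in {0, 1}."""
    return x - 3 <= M * y and -x + 10 <= M * (1 - y)

def in_disjunction(x, M=1000):
    """x is feasible iff some binary choice of y satisfies both constraints."""
    return any(either_or_feasible(x, y, M) for y in (0, 1))

assert in_disjunction(2)      # x <= 3 branch (y = 0)
assert in_disjunction(12)     # x >= 10 branch (y = 1)
assert not in_disjunction(5)  # inside the forbidden interval (3, 10)
```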
4.5.2 p Out of m Constraints Must Hold
Assume the case where the model has a set of m constraints but only some p out of the m (p < m) constraints must hold. The problem is allowed to select any combination of p constraints, and we want to select the p constraints that optimize the specified objective function. The remaining m − p constraints are dropped from the problem, although feasible solutions might coincidentally satisfy some of them. This case is a generalization of the either/or case, with m = 2 and p = 1.

The formulation is similar to that of the either/or case. We let y_i = 0 if constraint i is selected, and y_i = 1 otherwise. The selected constraints have the form f_i(x) − b_i ≤ 0, while the remaining m − p constraints are made redundant by f_i(x) − b_i ≤ M. Both conditions are captured by the constraints below:

f_i(x) − b_i ≤ My_i   for i = 1, 2, …, m
Σ_i y_i = m − p

where y_i is binary for all i.
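A brute-force Python sketch of this formulation (the three constraints, the value of M, and the helper names are illustrative assumptions): x meets the "p out of m" requirement exactly when some binary vector y with Σ y_i = m − p satisfies all the big-M constraints.

```python
from itertools import product

def p_of_m_feasible(x, fs, bs, p, M=1000):
    """x satisfies 'p out of m' iff some 0-1 vector y with sum(y) == m - p
    makes every big-M constraint f_i(x) - b_i <= M*y_i hold."""
    m = len(fs)
    for y in product((0, 1), repeat=m):
        if sum(y) != m - p:                  # relax exactly m - p constraints
            continue
        if all(f(x) - b <= M * yi for f, b, yi in zip(fs, bs, y)):
            return True
    return False

# Three hypothetical constraints: x <= 4, x >= 6 (written as -x <= -6),
# and x <= 9; require that at least p = 2 of them hold.
fs = [lambda x: x, lambda x: -x, lambda x: x]
bs = [4, -6, 9]

assert p_of_m_feasible(3, fs, bs, 2)       # satisfies x <= 4 and x <= 9
assert p_of_m_feasible(7, fs, bs, 2)       # satisfies x >= 6 and x <= 9
assert not p_of_m_feasible(11, fs, bs, 2)  # only x >= 6 holds
```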

4.5.3 Negation of a Constraint
Suppose the given constraint is f(x_1, x_2, …, x_n) − b ≤ 0, or f(x) − b ≤ 0, where b is the right-hand-side constant. Then the negation of this constraint is f(x) − b > 0, or −f(x) + b < 0.
4.5.4 If/Then Constraints
The logical statement "If A then B" is equivalent to the logical statement "(not A) or B".

In the context of MIP, we view a constraint as a statement. Constraints f_1(x) − b_1 ≤ 0 and f_2(x) − b_2 ≤ 0 are viewed as statements A and B, respectively. The negation of f_1(x) − b_1 ≤ 0 is f_1(x) − b_1 > 0, or −f_1(x) + b_1 < 0. Therefore, the simple implication constraint:

If f_1(x) − b_1 ≤ 0 then f_2(x) − b_2 ≤ 0

is equivalent to:

Either −f_1(x) + b_1 < 0 or f_2(x) − b_2 ≤ 0

which is obtained by applying the same transformation rule as for either/or constraints [6].
5. Conclusions
Constraint decision problems play a very important role in many areas as industry, education and planning.
Solving these problems is difficult and requires the problem to be precisely modeled. A problem can be modeled
using either linear equations or nonlinear equations. In this paper, we introduced linear and nonlinear models and
reviews their solving methods.

Modelling with nonlinear equations is more convenient, but solving nonlinear optimization problems is more difficult than solving linear models. In addition, the solutions obtained by nonlinear solvers are not necessarily optimal. Linear equations are more limited because of their required forms, but there are many linear solvers that find optimal solutions efficiently. Therefore, to benefit from 1) the efficiency of linear solvers and 2) the high-level formalism of nonlinear models, we must transform nonlinear equations into equivalent linear equations. In this paper, we introduced the most popular linearization methods.
6. References

[1] Rafeh, R. (2008), "The Modelling Language Zinc", PhD Thesis, Clayton School of IT, Monash University.

[2] Ottosson, G. (2000), "Integration of Constraint Programming and Integer Programming for Combinatorial Optimization", PhD Thesis, Computing Science Department, Uppsala University.

[3] Tsang, E. (1993), "Foundations of Constraint Satisfaction", Academic Press Limited.

[4] Barták, R. (1999), "Constraint Programming: What is Behind?", Constraint Programming for Decision and Control, pp. 7-15.

[5] Milano, M. and Trick, M. (2004), "Constraint and Integer Programming", Kluwer Academic.

[6] Chen, D., Batson, R. and Dang, Y. (2010), "Applied Integer Programming", Wiley.

[7] Buchanan, C. (2007), "Techniques for Solving Nonlinear Programming Problems with Emphasis on Interior Point Methods and Optimal Control Problems", PhD Thesis, Mathematics and Statistics, Edinburgh.

[8] Fourer, R., Gay, D. and Kernighan, B. (1993), "AMPL: A Modelling Language for Mathematical Programming", The Scientific Press.
