
Karush–Kuhn–Tucker conditions

In mathematical optimization, the Karush–Kuhn–Tucker (KKT) conditions, also known as the Kuhn–Tucker conditions, are first-order necessary conditions for a solution in nonlinear programming to be optimal, provided that some regularity conditions are satisfied. Allowing inequality constraints, the KKT approach to nonlinear programming generalizes the method of Lagrange multipliers, which allows only equality constraints. The system of equations and inequalities corresponding to the KKT conditions is usually not solved directly, except in the few special cases where a closed-form solution can be derived analytically. In general, many optimization algorithms can be interpreted as methods for numerically solving the KKT system of equations and inequalities.[1]

The KKT conditions were originally named after Harold W. Kuhn and Albert W. Tucker, who first published the conditions in 1951.[2] Later scholars discovered that the necessary conditions for this problem had been stated by William Karush in his master's thesis in 1939.[3][4]

Inequality constraint diagram for optimization problems


1 Nonlinear optimization problem

Consider the following nonlinear minimization or maximization problem:

optimize f(x)

subject to

g_i(x) \le 0, \quad h_j(x) = 0,

where x is the optimization variable, f is the objective or utility function, g_i (i = 1, ..., m) are the inequality constraint functions, and h_j (j = 1, ..., ℓ) are the equality constraint functions. The numbers of inequality and equality constraints are denoted m and ℓ, respectively.
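Problems of exactly this form can be handed to a general-purpose solver. As a small illustration (the objective and constraint here are hypothetical, chosen only for this sketch), SciPy's SLSQP method accepts the same data; note that SciPy's convention for inequality constraints is fun(x) >= 0, so a constraint g_i(x) <= 0 is passed as -g_i:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical instance: minimize f(x) = x1^2 + x2^2
# subject to g(x) = 1 - x1 - x2 <= 0
f = lambda x: x[0]**2 + x[1]**2
g = lambda x: 1 - x[0] - x[1]

res = minimize(f, x0=np.zeros(2), method="SLSQP",
               constraints=[{"type": "ineq", "fun": lambda x: -g(x)}])
print(res.x)  # approximately [0.5, 0.5]
```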
2 Necessary conditions

Suppose that the objective function f : R^n → R and the constraint functions g_i : R^n → R and h_j : R^n → R are continuously differentiable at a point x^*. If x^* is a local optimum and the optimization problem satisfies some regularity conditions (see below), then there exist constants μ_i (i = 1, ..., m) and λ_j (j = 1, ..., ℓ), called KKT multipliers, such that:

Stationarity
For maximizing f(x): \nabla f(x^*) = \sum_{i=1}^{m} \mu_i \nabla g_i(x^*) + \sum_{j=1}^{\ell} \lambda_j \nabla h_j(x^*)
For minimizing f(x): -\nabla f(x^*) = \sum_{i=1}^{m} \mu_i \nabla g_i(x^*) + \sum_{j=1}^{\ell} \lambda_j \nabla h_j(x^*)

Primal feasibility
g_i(x^*) \le 0, for i = 1, ..., m
h_j(x^*) = 0, for j = 1, ..., ℓ

Dual feasibility
\mu_i \ge 0, for i = 1, ..., m

Complementary slackness
\mu_i g_i(x^*) = 0, for i = 1, ..., m

In the particular case m = 0, i.e., when there are no inequality constraints, the KKT conditions turn into the Lagrange conditions, and the KKT multipliers are called Lagrange multipliers.

If some of the functions are non-differentiable, subdifferential versions of the Karush–Kuhn–Tucker (KKT) conditions are available.[5]
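Checking a candidate point against these four conditions is mechanical, which makes them easy to verify numerically. A minimal sketch in Python (the problem, point, and multiplier are hypothetical, reusing the instance from Section 1):

```python
import numpy as np

# Candidate point and multiplier for the hypothetical problem:
#   minimize f(x) = x1^2 + x2^2  subject to  g(x) = 1 - x1 - x2 <= 0
def grad_f(x):          # gradient of the objective
    return np.array([2 * x[0], 2 * x[1]])

def g(x):               # inequality constraint, g(x) <= 0
    return 1.0 - x[0] - x[1]

def grad_g(x):          # gradient of the constraint
    return np.array([-1.0, -1.0])

x_star, mu = np.array([0.5, 0.5]), 1.0

# Stationarity (minimization form): -grad f(x*) = mu * grad g(x*)
stationarity = np.allclose(-grad_f(x_star), mu * grad_g(x_star))
primal = g(x_star) <= 1e-9            # primal feasibility
dual = mu >= 0                        # dual feasibility
slack = abs(mu * g(x_star)) < 1e-9    # complementary slackness

print(stationarity, primal, dual, slack)  # all True at the optimum
```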


3 Regularity conditions (or constraint qualifications)

In order for a minimum point x^* to satisfy the above KKT conditions, the problem should satisfy some regularity conditions (constraint qualifications). Common examples include the linear independence constraint qualification (LICQ), which requires that the gradients of the active inequality constraints and the gradients of the equality constraints be linearly independent at x^*; the Mangasarian–Fromovitz constraint qualification (MFCQ); the constant rank constraint qualification (CRCQ); the constant positive linear dependence constraint qualification (CPLD); and the quasi-normality constraint qualification (QNCQ).

Here (v_1, ..., v_n) is positive-linear dependent if there exist a_1 \ge 0, ..., a_n \ge 0 and an element v_i such that \sum_{j \ne i} a_j v_j = v_i.[7]

It can be shown that

LICQ ⇒ MFCQ ⇒ CPLD ⇒ QNCQ

and

LICQ ⇒ CRCQ ⇒ CPLD ⇒ QNCQ

(and the converses are not true), although MFCQ is not equivalent to CRCQ.[8] In practice, weaker constraint qualifications are preferred since they provide stronger optimality conditions.
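LICQ, for instance, can be tested at a candidate point by checking the rank of the matrix whose rows are the gradients of the active inequality constraints and of the equality constraints. A small sketch (the constraint data are hypothetical, matching the earlier example):

```python
import numpy as np

# LICQ holds at x if the gradients of the active inequality constraints
# and of all equality constraints are linearly independent.
def licq_holds(x, gs, grad_gs, grad_hs, tol=1e-9):
    rows = [dg(x) for gfun, dg in zip(gs, grad_gs) if abs(gfun(x)) < tol]
    rows += [dh(x) for dh in grad_hs]
    if not rows:
        return True  # no active constraints: LICQ holds trivially
    J = np.vstack(rows)
    return np.linalg.matrix_rank(J) == J.shape[0]

# Example: g(x) = 1 - x1 - x2 <= 0, no equality constraints, at x* = (0.5, 0.5)
gs = [lambda x: 1 - x[0] - x[1]]
grad_gs = [lambda x: np.array([-1.0, -1.0])]
print(licq_holds(np.array([0.5, 0.5]), gs, grad_gs, []))  # True
```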
4 Sufficient conditions

In some cases, the necessary conditions are also sufficient for optimality. In general, the necessary conditions are not sufficient for optimality and additional information is necessary, such as the second-order sufficient conditions (SOSC). For smooth functions, SOSC involve the second derivatives, which explains the name.

The necessary conditions are sufficient for optimality if the objective function f of a maximization problem is a concave function, the inequality constraints g_i are continuously differentiable convex functions, and the equality constraints h_j are affine functions.

It was shown by Martin in 1985 that the broader class of functions in which the KKT conditions guarantee global optimality are the so-called Type 1 invex functions.[9][10]
4.1 Second-order sufficient conditions

For smooth, non-linear optimization problems, a second order sufficient condition is given as follows. Consider x^*, λ^*, μ^* satisfying the Karush–Kuhn–Tucker conditions above, with μ^* such that strict complementarity holds at x^* (i.e. all μ_i > 0). Then for all s ≠ 0 such that

\left[ \frac{\partial g(x^*)}{\partial x}, \frac{\partial h(x^*)}{\partial x} \right]^T s = 0

(where the bracketed expression is a row vector), the following condition must hold:

s^T \nabla^2_{xx} L(x^*, \lambda^*, \mu^*) s \ge 0.

If the above condition is strictly met, the function is a strict constrained local minimum.
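Numerically, this amounts to projecting the Hessian of the Lagrangian onto the null space of the active constraint gradients and checking its eigenvalues. A sketch using the same hypothetical problem as before (minimize x1^2 + x2^2 subject to 1 - x1 - x2 <= 0, with x* = (0.5, 0.5) and mu = 1):

```python
import numpy as np

# Hessian of the Lagrangian L = f + mu*g (g is linear, so only f contributes)
H_L = np.array([[2.0, 0.0],
                [0.0, 2.0]])
A = np.array([[-1.0, -1.0]])   # gradients of active constraints, as rows

# Basis for the null space of A (directions s with A s = 0) via SVD
_, sv, Vt = np.linalg.svd(A)
r = np.linalg.matrix_rank(A)
N = Vt[r:].T                   # columns span the null space of A

H_proj = N.T @ H_L @ N         # projected Hessian
print(np.all(np.linalg.eigvalsh(H_proj) > 0))  # True: strict local minimum
```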
5 Economics

See also: Profit maximization

Often in mathematical economics the KKT approach is used in theoretical models in order to obtain qualitative results. For example,[11] consider a firm that maximizes its sales revenue subject to a minimum profit constraint. Letting Q be the quantity of output produced (to be chosen), R(Q) be sales revenue with a positive first derivative and with a zero value at zero output, C(Q) be production costs with a positive first derivative and with a non-negative value at zero output, and G_min be the positive minimal acceptable level of profit, then the problem is a meaningful one if the revenue function levels off so it eventually is less steep than the cost function. The problem expressed in the previously given minimization form is

minimize −R(Q)

subject to

G_min ≤ R(Q) − C(Q)
Q ≥ 0,

and the KKT conditions are

(dR/dQ)(1 + μ) − μ(dC/dQ) ≤ 0,
Q ≥ 0,
Q[(dR/dQ)(1 + μ) − μ(dC/dQ)] = 0,
R(Q) − C(Q) − G_min ≥ 0,
μ ≥ 0,
μ[R(Q) − C(Q) − G_min] = 0.

Since Q = 0 would violate the minimum profit constraint, we have Q > 0, and hence the third condition implies that the first condition holds with equality. Solving that equality gives

dR/dQ = (μ / (1 + μ)) (dC/dQ).

Because it was given that dR/dQ and dC/dQ are strictly positive, this equality along with the non-negativity condition on μ guarantees that μ is positive, and so the revenue-maximizing firm operates at a level of output at which marginal revenue dR/dQ is less than marginal cost dC/dQ. This result is of interest because it contrasts with the behavior of a profit-maximizing firm, which operates at a level at which they are equal.
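To make the qualitative result concrete, one can pick explicit functional forms and solve the KKT system directly. The forms below are assumptions chosen for illustration (not from the article): R(Q) = 10Q − 0.5Q², C(Q) = 2Q + 4, and G_min = 27.5, so that the profit constraint binds:

```python
import numpy as np

# Hypothetical functional forms: R(Q) = 10Q - 0.5Q^2, C(Q) = 2Q + 4, G_min = 27.5
dR = lambda Q: 10 - Q          # marginal revenue
dC = lambda Q: 2.0             # marginal cost

# The profit constraint binds (mu > 0), so solve R(Q) - C(Q) = G_min:
#   -0.5Q^2 + 8Q - 4 = 27.5  <=>  0.5Q^2 - 8Q + 31.5 = 0  (roots Q = 7, 9)
roots = np.roots([0.5, -8.0, 31.5])
Q = max(roots)                        # revenue is increasing here, so take Q* = 9

# From dR/dQ = (mu/(1+mu)) dC/dQ, solve for mu:
mu = dR(Q) / (dC(Q) - dR(Q))
print(Q, mu, dR(Q) < dC(Q))           # 9.0, 1.0, True (MR < MC at the optimum)
```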

6 Value function

If we reconsider the optimization problem as a maximization problem with constant inequality constraints:

maximize f(x)

subject to

g_i(x) \le a_i, \quad h_j(x) = 0.

The value function is defined as

V(a_1, \ldots, a_n) = \sup_x f(x)

subject to

g_i(x) \le a_i, \quad h_j(x) = 0,
j \in \{1, \ldots, \ell\}, \quad i \in \{1, \ldots, m\}.

(So the domain of V is \{a \in R^m \mid for some x \in X, g_i(x) \le a_i, i \in \{1, \ldots, m\}\}.)

Given this definition, each coefficient μ_i is the rate at which the value function increases as a_i increases. Thus if each a_i is interpreted as a resource constraint, the coefficients tell you how much increasing a resource will increase the optimum value of our function f. This interpretation is especially important in economics and is used, for instance, in utility maximization problems.
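This shadow-price interpretation can be checked numerically by perturbing a_i and comparing the change in V with μ_i. A small sketch with a hypothetical problem whose value function is known in closed form (maximize f(x, y) = xy subject to x + y ≤ a with x, y ≥ 0, which gives x = y = a/2 and multiplier μ = a/2 on the resource constraint):

```python
import numpy as np

# Hypothetical example: V(a) = max { x*y : x + y <= a, x, y >= 0 } = (a/2)^2.
# The nonnegativity constraints are inactive at the optimum for a > 0.
def V(a):
    return (a / 2) ** 2   # value function, from the closed-form solution

a, eps = 2.0, 1e-6
mu = a / 2                                  # KKT multiplier at the optimum
shadow_price = (V(a + eps) - V(a)) / eps    # finite-difference estimate of dV/da
print(mu, shadow_price)                     # 1.0 and ~1.0: mu is the rate of increase of V
```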
7 Generalizations

With an extra constant multiplier μ_0 ≥ 0, which may be zero, in front of \nabla f(x^*), the KKT stationarity conditions turn into

\mu_0 \nabla f(x^*) + \sum_{i=1}^{m} \mu_i \nabla g_i(x^*) + \sum_{j=1}^{\ell} \lambda_j \nabla h_j(x^*) = 0,

which are called the Fritz John conditions.

The KKT conditions belong to a wider class of first-order necessary conditions (FONC), which allow for non-smooth functions using subderivatives.
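A standard textbook illustration of why the extra multiplier matters (this example is not from the article): minimize f(x) = x subject to g(x) = x^2 \le 0. The only feasible point is x^* = 0, where \nabla g(x^*) = 0, so no multiplier \mu \ge 0 can satisfy the KKT stationarity condition -\nabla f(x^*) = \mu \nabla g(x^*); but the Fritz John conditions hold with the leading multiplier equal to zero:

\mu_0 \nabla f(0) + \mu_1 \nabla g(0) = \mu_0 \cdot 1 + \mu_1 \cdot 0 = 0, \quad (\mu_0, \mu_1) = (0, 1) \ne (0, 0).

Here LICQ (for example) fails at x^*, which is why the KKT conditions need not hold.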

8 See also

Farkas' lemma

9 References

[1] Boyd, Stephen; Vandenberghe, Lieven (2004). Convex Optimization. Cambridge: Cambridge University Press. p. 244. ISBN 0-521-83378-7. MR 2061575.

[2] Kuhn, H. W.; Tucker, A. W. (1951). "Nonlinear programming". Proceedings of 2nd Berkeley Symposium. Berkeley: University of California Press. pp. 481–492. MR 47303.

[3] Karush, W. (1939). Minima of Functions of Several Variables with Inequalities as Side Constraints. M.Sc. Dissertation. Dept. of Mathematics, Univ. of Chicago, Chicago, Illinois.

[4] Kjeldsen, Tinne Hoff (2000). "A contextualized historical analysis of the Kuhn-Tucker theorem in nonlinear programming: the impact of World War II". Historia Math. 27 (4): 331–361. MR 1800317. doi:10.1006/hmat.2000.2289.

[5] Ruszczyński, Andrzej (2006). Nonlinear Optimization. Princeton, NJ: Princeton University Press. ISBN 978-0691119151. MR 2199043.

[6] Bertsekas, Dimitri (1999). Nonlinear Programming (2nd ed.). Athena Scientific. pp. 329–330. ISBN 9781886529007.

[7] Davis, Chandler (1954). "Theory of Positive Linear Dependence". American Journal of Mathematics. 76 (4): 733–746.

[8] Eustaquio, Rodrigo; Karas, Elizabeth; Ribeiro, Ademir. Constraint Qualification for Nonlinear Programming (PDF) (Technical report). Federal University of Paraná.

[9] Martin, D. H. (1985). "The Essence of Invexity". J. Optim. Theory Appl. 47 (1): 65–76. doi:10.1007/BF00941316.

[10] Hanson, M. A. (1999). "Invexity and the Kuhn-Tucker Theorem". J. Math. Anal. Appl. 236 (2): 594–604. doi:10.1006/jmaa.1999.6484.

[11] Chiang, Alpha C. (1984). Fundamental Methods of Mathematical Economics (3rd ed.). pp. 750–752.

10 Further reading

Andreani, R.; Martínez, J. M.; Schuverdt, M. L. (2005). "On the relation between constant positive linear dependence condition and quasinormality constraint qualification". Journal of Optimization Theory and Applications. 125 (2): 473–485. doi:10.1007/s10957-004-1861-9.

Avriel, Mordecai (2003). Nonlinear Programming: Analysis and Methods. Dover. ISBN 0-486-43227-0.

Boyd, S.; Vandenberghe, L. (2004). Convex Optimization. Cambridge University Press. ISBN 0-521-83378-7.

Nocedal, J.; Wright, S. J. (2006). Numerical Optimization. New York: Springer. ISBN 978-0-387-30303-1.

Sundaram, Rangarajan K. (1996). "Inequality Constraints and the Theorem of Kuhn and Tucker". A First Course in Optimization Theory. New York: Cambridge University Press. pp. 145–171. ISBN 0-521-49770-1.

11 External links

Karush–Kuhn–Tucker conditions with derivation and examples

Examples and Tutorials on the KKT Conditions

