Solution Manual to accompany
Pattern Classification (2nd ed.)
by R. O. Duda, P. E. Hart and D. G. Stork

David G. Stork

Copyright 2001. All rights reserved.
This Manual is for the sole use of designated educators and must not be distributed to students except in short, isolated portions and in conjunction with the use of Pattern Classification (2nd ed.).

June 18, 2003
Preface
In writing this Solution Manual I have learned a very important lesson. As a student, I thought that the best way to master a subject was to go to a superb university and study with an established expert. Later, I realized instead that the best way was to teach a course on the subject. Yet later, I was convinced that the best way was to write a detailed and extensive textbook. Now I know that all these years I have been wrong: in fact the best way to master a subject is to write the Solution Manual. In solving the problems for this Manual I have been forced to confront myriad technical details that might have tripped up the unsuspecting student. Students and teachers can thank me for simplifying or screening out problems that required pages of unenlightening calculations. Occasionally I had to go back to the text and delete the word "easily" from problem references that read "it can easily be shown (Problem ...)."

Throughout, I have tried to choose data or problem conditions that are particularly instructive. In solving these problems, I have found errors in early drafts of this text (as well as errors in books by other authors and even in classic refereed papers), and thus the accompanying text has been improved for the writing of this Manual.

I have tried to make the problem solutions self-contained and self-explanatory. I have gone to great lengths to ensure that the solutions are correct and clearly presented; many have been reviewed by students in several classes. Surely there are errors and typos in this manuscript, but rather than editing and rechecking these solutions over months or even years, I thought it best to distribute the Manual, however flawed, as early as possible. I accept responsibility for these inevitable errors, and humbly ask anyone finding them to contact me directly. (Please, however, do not ask me to explain a solution or help you solve a problem!) It should be a small matter to change the Manual for future printings, and you should contact the publisher to check that you have the most recent version. Notice, too, that this Manual contains a list of known typos and errata in the text which you might wish to photocopy and distribute to students.

I have tried to be thorough in order to help students, even to the occasional fault of verbosity. You will notice that several problems include the simple requests "explain your answer in words" and "graph your results." These were added for students to gain intuition and a deeper understanding. Graphing per se is hardly an intellectual challenge, but if the student graphs functions, he or she will develop intuition and remember the problem and its results better. Furthermore, when the student later sees graphs of data from dissertation or research work, the link to the homework problem and the material in the text will be more readily apparent. Note that due to the vagaries of automatic typesetting, figures may appear on pages after their reference in this Manual; be sure to consult the full solution to any problem.

I have also included worked examples and some sample final exams with solutions to cover material in the text. I distribute a list of important equations (without descriptions) with the exam so students can focus on understanding and using equations, rather than memorizing them. I also include on every final exam one problem verbatim from a homework assignment, taken from the book. I find this motivates students to review their homework assignments carefully, and allows somewhat more difficult problems to be included. These will be updated and expanded; thus if you have exam questions you find particularly appropriate, and would like to share them, please send a copy (with solutions) to me. It should be noted, too, that a set of overhead transparency masters of the figures from the text is available to faculty adopters. I have found these invaluable for lecturing, and I put a set on reserve in the library for students. The files can be accessed through a standard web browser or an ftp client program at the Wiley STM ftp area at:
ftp://ftp.wiley.com/public/sci_tech_med/pattern/

or from a link on the Wiley Electrical Engineering software supplements page at:

http://www.wiley.com/products/subject/engineering/electrical/software_supplem_elec_eng.html
I have taught from the text (in various stages of completion) at the University of California at Berkeley (Extension Division) and in three Departments at Stanford University: Electrical Engineering, Statistics and Computer Science. Numerous students and colleagues have made suggestions. Especially noteworthy in this regard are Sudeshna Adak, Jian An, Sung-Hyuk Cha, Koichi Ejiri, Rick Guadette, John Heumann, Travis Kopp, Yaxin Liu, Yunqian Ma, Sayan Mukherjee, Hirobumi Nishida, Erhan Oztop, Steven Rogers, Charles Roosen, Sergio Bermejo Sanchez, Godfried Toussaint, Namrata Vaswani, Mohammed Yousuf and Yu Zhong. Thanks too go to Dick Duda, who gave several excellent suggestions.
I would greatly appreciate notices of any errors in this Manual or the text itself, and would be especially grateful for solutions to problems not yet solved. Please send any such information to me at the address below; I will incorporate it into subsequent releases of this Manual. This Manual is for the use of educators and must not be distributed in bulk to students in any form. Short excerpts may be photocopied and distributed, but only in conjunction with the use of Pattern Classification (2nd ed.).
I wish you all the best of luck in teaching and research.
Ricoh Innovations, Inc.
2882 Sand Hill Road, Suite 115
Menlo Park, CA 94025-7022 USA
stork@rii.ricoh.com
David G. Stork
Contents

Preface
1 Introduction 5
2 Bayesian decision theory 7
   Problem Solutions 7
   Computer Exercises 74
3 Maximum likelihood and Bayesian parameter estimation 77
   Problem Solutions 77
   Computer Exercises 130
4 Nonparametric techniques 131
   Problem Solutions 131
   Computer Exercises 174
5 Linear discriminant functions 177
   Problem Solutions 177
   Computer Exercises 217
6 Multilayer neural networks 219
   Problem Solutions 219
   Computer Exercises 254
7 Stochastic methods 255
   Problem Solutions 255
   Computer Exercises 276
8 Nonmetric methods 277
   Problem Solutions 277
   Computer Exercises 294
9 Algorithm-independent machine learning 295
   Problem Solutions 295
   Computer Exercises 304
10 Unsupervised learning and clustering 305
   Problem Solutions 305
   Computer Exercises 355
Sample final exams and solutions 357
Worked examples 415
Errata and amendments in the text 417
   First and second printings 417
   Fifth printing 443
Chapter 1
Introduction
Problem Solutions
There are neither problems nor computer exercises in Chapter 1.
Chapter 2
Bayesian decision theory
Problem Solutions
Section 2.1
1. Equation 7 in the text states

P(error|x) = min[P(ω_1|x), P(ω_2|x)].

(a) We assume, without loss of generality, that for a given particular x we have P(ω_2|x) ≥ P(ω_1|x), and thus P(error|x) = P(ω_1|x). We have, moreover, the normalization condition P(ω_1|x) = 1 − P(ω_2|x). Together these imply P(ω_2|x) ≥ 1/2, or 2P(ω_2|x) ≥ 1, and

2P(ω_2|x)P(ω_1|x) ≥ P(ω_1|x) = P(error|x).

This is true at every x, and hence the integrals obey

∫ 2P(ω_2|x)P(ω_1|x) dx ≥ ∫ P(error|x) dx.

In short, 2P(ω_2|x)P(ω_1|x) provides an upper bound for P(error|x).

(b) From part (a), we have that P(ω_2|x) ≥ 1/2, but in the current conditions it need not be greater than 1/α for α < 2. Take as an example α = 4/3, P(ω_1|x) = 0.4 and hence P(ω_2|x) = 0.6. In this case, P(error|x) = 0.4. Moreover, we have

αP(ω_1|x)P(ω_2|x) = 4/3 × 0.6 × 0.4 = 0.32 < P(error|x).

This does not provide an upper bound for all values of P(ω_1|x).

(c) Let P(error|x) = P(ω_1|x). In that case, for all x we have

P(ω_2|x)P(ω_1|x) < P(ω_1|x) = P(error|x),
∫ P(ω_2|x)P(ω_1|x) dx < ∫ P(error|x) dx,

and we have a lower bound.
(d) The solution to part (b) also applies here.
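The bounds in parts (a) and (b) are easy to confirm numerically. A minimal sketch (the helper name `check_bounds` is mine, not from the text): it sweeps posteriors with P(ω_1|x) + P(ω_2|x) = 1 and confirms that 2P(ω_1|x)P(ω_2|x) never falls below min(P(ω_1|x), P(ω_2|x)), while the factor α = 4/3 does.

```python
# Numerical check of Problem 1: for a two-class problem with p1 + p2 = 1,
# 2*p1*p2 upper-bounds P(error|x) = min(p1, p2), while alpha*p1*p2 with
# alpha = 4/3 < 2 does not (part (b)'s counterexample is p1 = 0.4).

def check_bounds(num_points=1001):
    violations_2 = 0        # count of violations of the alpha = 2 bound
    alpha43_holds = True    # does the alpha = 4/3 "bound" survive the sweep?
    for i in range(num_points):
        p1 = i / (num_points - 1)
        p2 = 1.0 - p1
        p_error = min(p1, p2)
        if 2 * p1 * p2 < p_error - 1e-12:
            violations_2 += 1
        if (4 / 3) * p1 * p2 < p_error - 1e-12:
            alpha43_holds = False
    return violations_2, alpha43_holds

v2, h43 = check_bounds()
print(v2)    # 0: the factor-2 bound holds everywhere
print(h43)   # False: alpha = 4/3 fails for some posteriors (e.g. 0.4/0.6)
```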
Section 2.2
2. We are given that the density is of the form p(x|ω_i) = k e^{−|x−a_i|/b_i}.

(a) We seek k so that the function is normalized, as required of a true density. We integrate this function and set the result to 1:

k [ ∫_{−∞}^{a_i} exp[(x − a_i)/b_i] dx + ∫_{a_i}^{∞} exp[−(x − a_i)/b_i] dx ] = 1,

which yields 2b_i k = 1, or k = 1/(2b_i). Note that the normalization is independent of a_i, which corresponds to a shift along the axis and is hence indeed irrelevant to normalization. The distribution is therefore written

p(x|ω_i) = (1/(2b_i)) e^{−|x−a_i|/b_i}.

(b) The likelihood ratio can be written directly:

p(x|ω_1)/p(x|ω_2) = (b_2/b_1) exp[ −|x − a_1|/b_1 + |x − a_2|/b_2 ].

(c) For the case a_1 = 0, a_2 = 1, b_1 = 1 and b_2 = 2, the likelihood ratio is

p(x|ω_1)/p(x|ω_2) = 2e^{(x+1)/2}   for x ≤ 0,
p(x|ω_1)/p(x|ω_2) = 2e^{(1−3x)/2}  for 0 < x ≤ 1,
p(x|ω_1)/p(x|ω_2) = 2e^{(−x−1)/2}  for x > 1,

as shown in the figure.
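The normalization constant and the piecewise likelihood ratio can be verified numerically. A sketch (helper names such as `integrate` are mine, not from the text): it checks that k = 1/(2b) normalizes the double-exponential density, and that the direct ratio p(x|ω_1)/p(x|ω_2) matches the three-piece closed form for a_1 = 0, b_1 = 1, a_2 = 1, b_2 = 2.

```python
# Numeric check of Problem 2: normalization k = 1/(2b) and the piecewise
# likelihood ratio 2e^{(x+1)/2}, 2e^{(1-3x)/2}, 2e^{(-x-1)/2}.
import math

def p(x, a, b):
    # double-exponential density with the derived constant k = 1/(2b)
    return math.exp(-abs(x - a) / b) / (2 * b)

def integrate(f, lo=-50.0, hi=50.0, n=100000):
    # crude trapezoidal rule; the tails beyond +/-50 are negligible here
    h = (hi - lo) / n
    s = 0.5 * (f(lo) + f(hi)) + sum(f(lo + i * h) for i in range(1, n))
    return s * h

area = integrate(lambda x: p(x, a=0.3, b=1.7))
assert abs(area - 1.0) < 1e-4          # density integrates to 1

def ratio_closed_form(x):
    # the three-piece form from part (c), with b2/b1 = 2
    if x <= 0:
        return 2 * math.exp((x + 1) / 2)
    elif x <= 1:
        return 2 * math.exp((1 - 3 * x) / 2)
    return 2 * math.exp((-x - 1) / 2)

for x in (-0.7, 0.4, 2.3):
    direct = p(x, 0, 1) / p(x, 1, 2)   # p(x|w1)/p(x|w2) computed directly
    assert abs(direct - ratio_closed_form(x)) < 1e-9
print("ok")
```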
Section 2.3
3. We are to use the standard zero-one classification cost, that is λ_11 = λ_22 = 0 and λ_12 = λ_21 = 1.
(a) We have the priors P(ω_1) and P(ω_2) = 1 − P(ω_1). The Bayes risk is given by Eqs. 12 and 13 in the text:

R(P(ω_1)) = P(ω_1) ∫_{R_2} p(x|ω_1) dx + (1 − P(ω_1)) ∫_{R_1} p(x|ω_2) dx.

To obtain the prior with the minimum risk, we take the derivative with respect to P(ω_1) and set it to 0, that is

dR(P(ω_1))/dP(ω_1) = ∫_{R_2} p(x|ω_1) dx − ∫_{R_1} p(x|ω_2) dx = 0,

which gives the desired result:

∫_{R_2} p(x|ω_1) dx = ∫_{R_1} p(x|ω_2) dx.

(b) This solution is not always unique, as shown by a simple counterexample. Let P(ω_1) = P(ω_2) = 0.5 and

p(x|ω_1) = 1 for −0.5 ≤ x ≤ 0.5, and 0 otherwise,
p(x|ω_2) = 1 for 0 ≤ x ≤ 1, and 0 otherwise.

It is easy to verify that the decision regions R_1 = [−0.5, 0.25] and R_1 = [0, 0.5] (with R_2 the complement in each case) both satisfy the equation in part (a); thus the solution is not unique.
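The non-uniqueness claim can be checked numerically. A sketch (the helper names `check`, `integrate` are mine): for each candidate R_1, with R_2 its complement, it verifies that the integral of p(x|ω_1) over R_2 equals the integral of p(x|ω_2) over R_1.

```python
# Numeric check of Problem 3(b): both choices of R1 satisfy
#   integral_{R2} p(x|w1) dx == integral_{R1} p(x|w2) dx,
# so the condition from part (a) does not pin down a unique solution.

def p1(x):  # uniform on [-0.5, 0.5]
    return 1.0 if -0.5 <= x <= 0.5 else 0.0

def p2(x):  # uniform on [0, 1]
    return 1.0 if 0.0 <= x <= 1.0 else 0.0

def integrate(f, lo, hi, n=20000):
    # midpoint rule; accurate enough for these piecewise-constant densities
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

def check(r1_lo, r1_hi, lo=-2.0, hi=2.0):
    # R2 is the complement of R1 = [r1_lo, r1_hi] within the support window
    int_R2_p1 = integrate(p1, lo, r1_lo) + integrate(p1, r1_hi, hi)
    int_R1_p2 = integrate(p2, r1_lo, r1_hi)
    return abs(int_R2_p1 - int_R1_p2) < 1e-3

print(check(-0.5, 0.25))   # True
print(check(0.0, 0.5))     # True
```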
4. Consider the minimax criterion for a twocategory classiﬁcation problem.
(a) The total risk is the integral over the two regions R_i of the posteriors times their costs:

R = ∫_{R_1} [λ_11 P(ω_1)p(x|ω_1) + λ_12 P(ω_2)p(x|ω_2)] dx
  + ∫_{R_2} [λ_21 P(ω_1)p(x|ω_1) + λ_22 P(ω_2)p(x|ω_2)] dx.

We use ∫_{R_2} p(x|ω_2) dx = 1 − ∫_{R_1} p(x|ω_2) dx and P(ω_2) = 1 − P(ω_1), and regroup to find:

R = λ_22 + (λ_12 − λ_22) ∫_{R_1} p(x|ω_2) dx
  + P(ω_1) [ (λ_11 − λ_22) + (λ_21 − λ_11) ∫_{R_2} p(x|ω_1) dx + (λ_22 − λ_12) ∫_{R_1} p(x|ω_2) dx ].

(b) Consider an arbitrary prior 0 < P*(ω_1) < 1, and assume the decision boundary has been set so as to achieve the minimal (Bayes) error for that prior. If one holds the same decision boundary but changes the prior probabilities (i.e., P(ω_1) in the figure), then the error changes linearly, as given by the formula in part (a). The true Bayes error, however, must be less than or equal to that (linearly bounded) value, since one has the freedom to change the decision boundary at each value of P(ω_1). Moreover, we note that the Bayes error is 0 at P(ω_1) = 0 and at P(ω_1) = 1, since the Bayes decision rule under those conditions is to always decide ω_2 or ω_1, respectively, and this gives zero error. Thus the curve of Bayes error rate is concave down for all prior probabilities.

(c) According to the general minimax equation in part (a), for our case (i.e., λ_11 = λ_22 = 0 and λ_12 = λ_21 = 1) the decision boundary is chosen to satisfy

∫_{R_2} p(x|ω_1) dx = ∫_{R_1} p(x|ω_2) dx.

We assume that a single decision point suffices, and thus we seek to find x* such that

∫_{−∞}^{x*} N(μ_1, σ_1²) dx = ∫_{x*}^{∞} N(μ_2, σ_2²) dx,

where, as usual, N(μ_i, σ_i²) denotes a Gaussian. We assume for definiteness and without loss of generality that μ_2 > μ_1, and that the single decision point lies between the means. Recall the definition of the error function, given by Eq. 96 in the Appendix of the text, that is,

erf(x) = (2/√π) ∫_0^x e^{−t²} dt.

We can rewrite the above condition as

erf[(x* − μ_1)/σ_1] = erf[(μ_2 − x*)/σ_2].

If the values of the error function are equal, then their corresponding arguments must be equal, that is

(x* − μ_1)/σ_1 = (μ_2 − x*)/σ_2,

and solving for x* gives the value of the decision point

x* = (μ_2 σ_1 + μ_1 σ_2)/(σ_1 + σ_2).

(d) Because the minimax error rate is independent of the prior probabilities, we can choose a particularly simple case to evaluate the error, for instance P(ω_1) = 0. In that case our error becomes

E = 1/2 − erf[(x* − μ_1)/σ_1] = 1/2 − erf[(μ_2 − μ_1)/(σ_1 + σ_2)].

(e) We substitute the values given in the problem into the formula in part (c) and find

x* = (μ_2 σ_1 + μ_1 σ_2)/(σ_1 + σ_2) = (1/2 + 0)/(1 + 1/2) = 1/3.

The error from part (d) is then

E = 1/2 − erf[(1/3 − 0)/1] = 1/2 − erf[1/3] = 0.1374.

(f) Note that the distributions have the same form (in particular, the same variance). Thus, by symmetry the Bayes error for P(ω_1) = P* (for some value P*) must be the same as for P(ω_2) = P*. Because P(ω_2) = 1 − P(ω_1), we know that the curve, analogous to the one in part (b), is symmetric around the point P(ω_1) = 0.5. Because the curve is concave down, it must therefore peak at P(ω_1) = 0.5, that is, at equal priors. The tangent to the graph of the error versus P(ω_1) is thus horizontal at P(ω_1) = 0.5. For this case of equal priors, the Bayes decision point for this problem can be stated simply: it is the point midway between the means of the two distributions, that is, x* = 5.5.
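The arithmetic in parts (c) through (e) is easy to confirm. A sketch (variable names are mine) using the text's Eq. 96 convention for erf, which matches Python's `math.erf`:

```python
# Numeric check of Problem 4(c)-(e): decision point x* = 1/3 and
# minimax error E = 1/2 - erf(1/3) = 0.1374 for mu1 = 0, sigma1 = 1,
# mu2 = 1/2, sigma2 = 1/2.
import math

mu1, sigma1 = 0.0, 1.0
mu2, sigma2 = 0.5, 0.5

# part (c): x* = (mu2*sigma1 + mu1*sigma2) / (sigma1 + sigma2)
x_star = (mu2 * sigma1 + mu1 * sigma2) / (sigma1 + sigma2)
assert abs(x_star - 1 / 3) < 1e-12

# part (d): E = 1/2 - erf[(x* - mu1)/sigma1], in the text's erf convention
E = 0.5 - math.erf((x_star - mu1) / sigma1)
print(round(E, 4))   # 0.1374

# consistency: (x* - mu1)/sigma1 equals (mu2 - mu1)/(sigma1 + sigma2)
assert abs((x_star - mu1) / sigma1 - (mu2 - mu1) / (sigma1 + sigma2)) < 1e-12
```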
5. We seek to generalize the notion of minimax criteria to the case where two independent prior probabilities are set.
(a) We use the triangle distributions and conventions in the figure. We solve for the decision points as follows (being sure to keep the signs correct, and assuming that the decision boundary consists of just two points, x_1* and x_2*):

P(ω_1) [δ_1 − (x_1* − μ_1)]/δ_1² = P(ω_2) [δ_2 − (μ_2 − x_1*)]/δ_2²,

which has solution

x_1* = [ P(ω_1)δ_2²δ_1 + P(ω_1)δ_2²μ_1 − P(ω_2)δ_1²δ_2 + P(ω_2)δ_1²μ_2 ] / [ P(ω_1)δ_2² + P(ω_2)δ_1² ].

An analogous derivation for the other decision point gives:

P(ω_2) [δ_2 − (x_2* − μ_2)]/δ_2² = P(ω_3) [δ_3 − (μ_3 − x_2*)]/δ_3²,

which has solution

x_2* = [ P(ω_2)δ_3²δ_2 + P(ω_2)δ_3²μ_2 − P(ω_3)δ_2²δ_3 + P(ω_3)δ_2²μ_3 ] / [ P(ω_2)δ_3² + P(ω_3)δ_2² ].

(b) Note that from our normalization condition, Σ_{i=1}^{3} P(ω_i) = 1, we can express all priors in terms of just two independent ones, which we choose to be P(ω_1) and P(ω_2). We could substitute the values for x_i* and integrate, but it is just a bit simpler to go directly to the calculation of the error, E, as a function of the priors P(ω_1) and P(ω_2) by considering the four contributions:

E = P(ω_1) (1/(2δ_1²)) [μ_1 + δ_1 − x_1*]²
  + P(ω_2) (1/(2δ_2²)) [δ_2 − μ_2 + x_1*]²
  + P(ω_2) (1/(2δ_2²)) [μ_2 + δ_2 − x_2*]²
  + (1 − P(ω_1) − P(ω_2)) (1/(2δ_3²)) [δ_3 − μ_3 + x_2*]².

To obtain the minimax solution, we take the two partial derivatives and set them to zero. The first of the derivative equations,

∂E/∂P(ω_1) = 0,

yields the equation

[(μ_1 + δ_1 − x_1*)/δ_1]² = [(δ_3 − μ_3 + x_2*)/δ_3]²,

or

(μ_1 + δ_1 − x_1*)/δ_1 = (δ_3 − μ_3 + x_2*)/δ_3.

Likewise, the second of the derivative equations,

∂E/∂P(ω_2) = 0,

yields the equation

[(δ_2 − μ_2 + x_1*)/δ_2]² + [(μ_2 + δ_2 − x_2*)/δ_2]² = [(δ_3 − μ_3 + x_2*)/δ_3]².

These simultaneous quadratic equations have solutions of the general form

x_i* = (b_i + √c_i)/a_i,   i = 1, 2.

After a straightforward, but very tedious, calculation we find the coefficients a_i, b_i and c_i in terms of the μ_i and δ_i.
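The closed-form decision point in part (a) can be spot-checked numerically. A sketch (the parameter values and the helper `tri` are mine, chosen so the boundary falls in the overlap of the two triangles): at x_1* the two prior-weighted triangle densities should be equal.

```python
# Numeric check of Problem 5(a): at the decision point x1*, the weighted
# densities cross: P1*(d1 - (x1* - mu1))/d1**2 == P2*(d2 - (mu2 - x1*))/d2**2.

def tri(x, mu, delta):
    # triangle density centered at mu with half-width delta
    return max(0.0, (delta - abs(x - mu)) / delta**2)

P1, P2 = 0.3, 0.45            # illustrative priors (need not sum to 1 here)
mu1, d1 = 0.0, 1.0            # class-1 triangle
mu2, d2 = 1.2, 0.8            # class-2 triangle, overlapping class 1

# closed-form decision point between classes 1 and 2, from part (a)
x1 = (P1 * d2**2 * (d1 + mu1) + P2 * d1**2 * (mu2 - d2)) \
     / (P1 * d2**2 + P2 * d1**2)

lhs = P1 * tri(x1, mu1, d1)   # weighted class-1 density at x1*
rhs = P2 * tri(x1, mu2, d2)   # weighted class-2 density at x1*
print(abs(lhs - rhs) < 1e-9)  # True: the formula puts x1* at the crossing
```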