
Journal of Geodesy (1998) 72: 71–100

Generalized inverses of nonlinear mappings
and the nonlinear geodetic datum problem

A. Dermanis
Department of Geodesy and Surveying, The Aristotle University of Thessaloniki, University Box 503, GR-54006 Thessaloniki, Greece
Phone: +30 31 996111; fax: +30 31 996137; email: dermanis@topo.auth.gr

Received: 11 April 1996 / Accepted: 19 April 1997

Abstract. Motivated by the existing theory of the geometric characteristics of linear generalized inverses of linear mappings, an attempt is made to establish a corresponding mathematical theory for nonlinear generalized inverses of nonlinear mappings in finite-dimensional spaces. The theory relies on the concept of fiberings, consisting of disjoint manifolds (fibers) into which the domain and range spaces of the mappings are partitioned. Fiberings replace the quotient spaces generated by some characteristic subspaces in the linear case. In addition to the simple generalized inverse, the minimum-distance and the x₀-nearest generalized inverses are introduced and characterized, in analogy with the least-squares and the minimum-norm generalized inverses of the linear case. The theory is specialized to the geodetic mapping from network coordinates to observables, and the nonlinear transformations (Baarda's S-transformations) between different solutions are defined with the help of transformation parameters obtained from the solution of nonlinear equations. In particular, the transformations from any solution to an x₀-nearest solution (corresponding to Meissl's inner solution) are given for two- and three-dimensional networks, for both the similarity and the rigid transformation case. Finally, the nonlinear theory is specialized to the linear case with the help of the singular-value decomposition, and algebraic expressions with specific geometric meaning are given for all possible types of generalized inverses.

Key words. Generalised inverse · Nonlinearity · Pseudoinverse · Inverse problems · Datum problem

1 Introduction

The datum problem or zero-order design problem (Grafarend 1974) arising in the adjustment of observations related to geodetic networks has received considerable attention since the pioneering work of Meissl (1965, 1969) and its popularization by Blaha (1971). The nature of the problem has been clarified in two important papers by Grafarend and Schaffrin (1974, 1976), while the relation among various solutions, as well as Meissl's "inner" solution, have been established with the introduction of the famous Baarda S-transformation (Baarda 1973; see also Mierlo 1980; Koch 1982). It might therefore seem that every aspect of this problem has been fully investigated and well understood for quite some time now, though some recent work (Xu 1995) might point to the contrary. However, with the exception of some simple cases, such as that of levelling in a small area, the mathematical treatment of the problem is confined to its linearized version, although it is well understood as a nonlinear problem. Nonlinear adjustment has been extensively studied in the geodetic literature, especially from the viewpoint of nonlinear least squares; see, e.g., Teunissen (1985, 1989a, 1989b, 1990), Grafarend and Schaffrin (1991) and Lohse (1994), where further references can be found. A completely different approach is that of Dermanis and Sansò (1995), where optimal nonlinear estimators have been investigated from a strictly probabilistic point of view. In both approaches the datum problem is solved in an implicit way. In the nonlinear least-squares adjustment solved by an iteration scheme, the choice of datum depends on the choice of the initial parameter values used for starting the iteration process, as well as on the principle used for "improving" these parameters. In the nonlinear estimation case, the need to adopt a Bayesian point of view in order to obtain a meaningful estimation independent of the unknown "true" parameter values also solves the datum problem in an implicit way, where the datum choice is hidden in the choice of the prior probability distribution of the parameters.

The solution to the nonlinear datum problem presented here is based on the concept of the S-transformation and has the form of such a similarity (or rigid) transformation with parameters which come from the solution of a system of nonlinear equations.

Another important aspect of the geodetic datum problem in its linear form is its relation to generalized inverses of matrices (linear operators), which led Bjerhammar (1951) to an independent introduction of the Moore-Penrose generalized inverse, later than Moore (1920) but before Penrose (1955). The question arises whether the nonlinear version of the geodetic datum problem bears a similar relation to some type of generalized inverses of nonlinear operators. An attempt will be made to look mainly into the geometric aspects of such nonlinear generalized inverses, although the building of a concrete mathematical theory requires a more rigorous treatment which is beyond our present scope. We shall base our investigation on the representation theory of various types of linear generalized inverses, which has been introduced by Takos (1976) from an algebraic point of view and especially by Teunissen (1985) from a geometric point of view.

The datum problem is always a part of the problem of the adjustment of redundant observations which are related to a set of parameters (coordinates in the geodetic case) which in fact cannot be determined from the available observations. The reason is that the information contained in the angle and distance (angle) observations relates only to the shape and size (shape) of the network, while coordinates relate in addition to its position (position and size) with respect to a certain reference frame. The problem of placing the network in relation to a given reference frame can also be seen from the inverse point of view of placing a reference frame in relation to a given (i.e., physically existing) network. This choice of reference frame (or datum in geodetic terminology) poses a problem, the datum problem, which must be solved in an arbitrary but consistent way, based on the introduction of additional information not contained in the observations.

The usual approach to the description of the adjustment and choice-of-datum problem is to consider the n observed parameters (n > r) as the coordinates of a linear space Y, called the observation space, in which the r-dimensional manifold M modeling the physical system, called the model manifold, is lying. (We confine ourselves to the case of discrete and finite observations.) The model manifold is described as the range of a nonlinear operator f from an m-dimensional parameter space X (m > r) into the observation space Y. The mapping f is established by the mathematical equations which relate all observables to the unknown parameters. In a "normal" situation, which is almost never the case in geodesy, the number of parameters and the dimension of the model manifold are equal (m = r), in which case the restriction f|M of f to M has an inverse which may serve as a coordinate mapping from M to X = R^m. In other words, the chosen parameters can serve as a particular system of coordinates on the model manifold, and the only problem to be solved is the adjustment problem. In the linear case the rank of f is r = m and the corresponding model is called a full-rank model.

The unavoidable observational errors are added to the true values of the observables (which correspond to a point on the model manifold) and give observations corresponding to a known point outside the manifold. The adjustment problem can be simply defined as the problem of finding an optimal way to "return to the model manifold". In the case where m > r (model without full rank in the linear case), the determination of parameter values is not trivial anymore, because there is an infinite set of parameter values which f maps on the same manifold point corresponding to the adjusted observations. The datum problem is exactly the problem of choosing one out of all possible parameter sets.

Of course, rank deficiency in an observational model is not exclusively related to the datum problem, as demonstrated, e.g., in Dermanis and Grafarend (1981). Additional rank deficiency may result from the inability of the available observations to recover the shape (or shape and size) of a geodetic network. Such cases, although partly covered by the treatment in Sect. 3, need individual treatment and cannot be part of the general approach taken here, where our main subject is the common datum defect resulting only from the use of coordinates as parameters.

We must point out that the above point of view is not coordinate-free, since it depends on the choice of a specific set of m unknown parameters. This is perhaps of little concern to geodesy, where the choice of coordinates as parameters imposes itself as a matter of convenience. It is possible, however, to establish a theory where both the adjustment and the datum problem are treated in a coordinate-free way, in the spirit of modern differential geometry.

A more in-depth introduction to the (nonlinear) datum problem, especially in relation to the modelling problem, can be found in Dermanis (1991).

2 Geometric characteristics of generalized inverses of linear operators

A natural point of departure for the study of the nonlinear datum problem is our knowledge of the simpler linear case. For this reason we shall review some of the geometric characteristics of the theory of generalized inverses of linear mappings, and point out those which have proven to be more appropriate for generalization to the nonlinear case. Our exposition will be more casual than in the rest of this work, since a rigorous and thorough study already exists in the geodetic literature (Teunissen 1985).

A linear mapping f : X → Y is characterized geometrically by its range

R(f) = {y ∈ Y | y = f(x) for some x ∈ X}

which is a linear subspace of Y, and its null space

N(f) = {x ∈ X | f(x) = 0}

which is a linear subspace of X. To any element y = f(x) of R(f) corresponds an affine subspace x + N(f) of X, with elements which f maps to the same element y = f(x). The parallel translates of N(f) are thus the "solution spaces" of f, which correspond one-to-one to the elements of R(f).

If X is m-dimensional, Y is n-dimensional and rank(f) = dim R(f) = r, then d = m − r is the injectivity defect of f, while f = n − r is its surjectivity defect.
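The two defects can be illustrated with the simple levelling case mentioned in the Introduction. The following minimal Python sketch is an added illustration (the network and numbers are hypothetical, not from the paper): the parameters are m = 3 heights, the observables are n = 3 height differences, and the rank is r = 2.

```python
# Levelling network: m = 3 heights, n = 3 observed height differences;
# a hypothetical illustration (numbers not from the paper).

def f(h):
    """Mapping f from heights to the observable height differences."""
    h1, h2, h3 = h
    return (h2 - h1, h3 - h2, h3 - h1)

x = (10.0, 12.5, 11.0)                  # one set of parameter values
x_shifted = tuple(h + 7.0 for h in x)   # same network, datum shifted by 7

# Injectivity defect d = m - r = 1: a common shift of all heights
# (a change of datum) leaves the observables unchanged.
assert f(x) == f(x_shifted)

# Surjectivity defect f = n - r = 1: the observables are dependent
# (y3 = y1 + y2), so R(f) is a proper r = 2 dimensional subspace of Y.
y = f(x)
assert abs(y[0] + y[1] - y[2]) < 1e-12
```

In a two- or three-dimensional network the common height shift is replaced by translations and rotations of the coordinates, but the structure of the defect is the same.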
The surjectivity defect f is usually called the degrees of freedom. Furthermore, N(f) is a d-dimensional subspace of X and R(f) is an r-dimensional subspace of Y.

Coming to a linear generalized inverse g of f, it follows from the defining property f = f ∘ g ∘ f that both the linear operators p = f ∘ g and q = g ∘ f are idempotent and thus linear projectors. As such they are completely characterized by their range (invariant linear subspace) and their complementary null space (linear subspace along which they project); see, e.g., Halmos (1974, Sect. 41), Schaffrin et al. (1977).

We shall first show that R(p) = R(f). For every y ∈ Y, ŷ ≡ p(y) = f(g(y)) ∈ R(f), implying that R(p) ⊂ R(f). On the other hand, for any ŷ ∈ R(f) there exists x ∈ X such that ŷ = f(x) and p(ŷ) = (f ∘ g)(f(x)) = (f ∘ g ∘ f)(x) = f(x) = ŷ, implying that R(f) ⊂ R(p). Therefore R(p) = R(f).

R(p) = R(f) is an r-dimensional (linear) subspace of Y. The linear projector p is completely determined if also the f-dimensional subspace C ⊂ Y is given such that Y = R(f) ⊕ C and y − p(y) ∈ C for every y ∈ Y. Thus p is a linear projection on R(f) along C.

Fig. 1. The geometry of the generalized inverse g of a linear mapping f. Here x = g(y), ŷ = f(x) = p(y), while x̂ = g(ŷ) = q(x). All the elements of y + N(g) are mapped into x, while all the elements of y + C are mapped into x̂

The elements of C are projected by p to the zero element of Y, i.e., C = N(p) = N(f ∘ g). Indeed, for any y ∈ C it must hold that ŷ = p(y) ∈ R(f) while y − ŷ ∈ C. But C is a linear subspace, and if y ∈ C and y − ŷ ∈ C so is their difference, i.e., y − (y − ŷ) = ŷ ∈ C. Now ŷ ∈ C and ŷ ∈ R(f), while C ∩ R(f) = {0}, and therefore ŷ = 0.

From the fundamental property f = f ∘ g ∘ f and the well-known property for the rank of a matrix product (mapping composition), it follows that r_g = rank(g) cannot be less than the rank r of f, i.e., r_g ≥ r. N(g) is thus a subspace of Y with dimension f_g = n − r_g ≤ n − r = f.

We shall show that in fact N(g) ⊂ C: if y ∈ N(g) then g(y) = 0 and p(y) = f(g(y)) = f(0) = 0, i.e., also y ∈ N(p) = C, and therefore N(g) ⊂ C.

For every ŷ ∈ R(f) the affine subspace ŷ + C consists of all elements of Y which p maps on the same element ŷ. The elements of any particular affine subspace y + N(g) are mapped on the same element x = g(y) ∈ R(g) ⊂ X. Further mapping of this particular element x by f to ŷ = f(x) has as a consequence that all elements of y + N(g) are mapped by p = f ∘ g on the same element ŷ = p(y). Therefore y + N(g) ⊂ p(y) + C = y + C, and furthermore N(g) ⊂ C, as already shown.

A similar situation holds for the linear projector q. For any x ∈ X, x̂ = q(x) = (g ∘ f)(x) ∈ g(R(f)) ⊂ R(g). The subspace S = R(q) = g(R(f)) ⊂ R(g) is the subspace onto which q maps elements of X. If two elements x₁, x₂ of X are mapped onto the same element x̂ ∈ S, then x̂ = q(x₁) = q(x₂) implies that g(f(x₁) − f(x₂)) = 0 and f(x₁) − f(x₂) ∈ N(g) ⊂ C. But due to Y = R(f) ⊕ C we have R(f) ∩ C = {0} and R(f) ∩ N(g) = {0}. Since f(x₁) − f(x₂) = f(x₁ − x₂) belongs to both R(f) and N(g), it must hold that f(x₁) − f(x₂) = 0 and x₁ − x₂ ∈ N(f). Conversely, if x₁ − x₂ ∈ N(f) then f(x₁) − f(x₂) = f(x₁ − x₂) = 0, implying q(x₁) − q(x₂) = g(f(x₁ − x₂)) = g(0) = 0, and all elements of any affine subspace x + N(f) are projected by q on the same element x̂. Therefore q projects elements of X on S = g(R(f)) along N(f), and X = N(f) ⊕ S. Since N(f) is d-dimensional, S will be an r-dimensional subspace of X.

An important property of S is that it intersects any solution space (affine subspace) x + N(f) of f in a single element. To see this, let both x̂₁ and x̂₂ belong to S ∩ [x + N(f)]. Since x̂₁ ∈ [x + N(f)], x − x̂₁ ∈ N(f), f(x − x̂₁) = 0 and q(x − x̂₁) = g(f(x − x̂₁)) = g(0) = 0, implying that q(x̂₁) = q(x). Since x̂₁ ∈ S, it holds that q(x̂₁) = x̂₁ and thus x̂₁ = q(x). With similar reasoning x̂₂ = q(x), and therefore x̂₁ = x̂₂.

R(f) and S are subspaces of the same dimension which are in a one-to-one correspondence. If ŷ ∈ R(f) then ŷ = f(x) for some x ∈ X, and g(ŷ) = g(f(x)) = q(x) ≡ x̂ ∈ S. Also x̂ ∈ [x + N(f)], and x̂ = g(ŷ) = q(x) is the unique element in the intersection of x + N(f) and S, i.e., {x̂} = [x + N(f)] ∩ S. To any element x̂ ∈ S there corresponds a unique element ŷ ∈ R(f) such that ŷ = f(x̂) and x̂ = g(ŷ). We may say that the restriction of g to R(f) is the ordinary inverse of the restriction of f on S: g|R(f) = (f|S)⁻¹. In symmetry with S = g(R(f)), it holds also that R(f) = f(S).

Once S is given, g is defined on R(f). It remains to be defined outside R(f). Let y ∉ R(f) and ŷ = p(y). Then y − ŷ ∈ C. Taking into account that N(g) ⊂ C, we can decompose

y − ŷ = (y − ŷ)_N + (y − ŷ)_C′

into a part (y − ŷ)_N ∈ N(g) and a part (y − ŷ)_C′ ∈ C′, where C′ is a complement of N(g) with respect to C: C = N(g) ⊕ C′. This decomposition is not unique but depends on the specific choice of the subspace C′ ⊂ C.
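The defining property and the projector structure described above can be checked on a minimal numerical example. The following sketch is an added illustration with a hand-picked 2×2 matrix pair (not from the paper): A represents a rank-deficient f, and G one of its generalized inverses.

```python
# A rank-deficient linear mapping f (matrix A) and one of its generalized
# inverses g (matrix G); a hypothetical hand-picked 2x2 example.

def matmul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1.0, 0.0],
     [1.0, 0.0]]     # rank r = 1, N(f) = span{(0,1)}, R(f) = span{(1,1)}
G = [[1.0, 0.0],
     [0.0, 0.0]]     # satisfies the defining property A G A = A

assert matmul(matmul(A, G), A) == A   # f o g o f = f

p = matmul(A, G)     # projector on R(f) along C = N(g) = span{(0,1)}
q = matmul(G, A)     # projector on S = g(R(f)) = span{(1,0)} along N(f)
assert matmul(p, p) == p              # idempotent
assert matmul(q, q) == q              # idempotent
```

Here S = span{(1,0)} indeed meets each solution space x + N(f) (a vertical line) in exactly one point.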
The subspace C′ is of dimension dim C′ = dim C − dim N(g) = f − (n − r_g) = n − r − (n − r_g) = r_g − r ≡ Δr.

On the basis of the decomposition

y = ŷ + (y − ŷ) = ŷ + (y − ŷ)_N + (y − ŷ)_C′

and the linearity of g we have

g(y) = g(ŷ) + g((y − ŷ)_N) + g((y − ŷ)_C′) = g(ŷ) + g((y − ŷ)_C′).

The g(ŷ) part is uniquely defined: the subspaces R(f) and C determine ŷ, and g(ŷ) is the unique element in the intersection of S with the affine subspace x + N(f) which f maps into ŷ (the solution space of ŷ). It remains to determine how g acts on the subspace C′. This is the only required information to define completely the action of g, in addition to a description of S and C. If a basis is known for N(g) and a basis for C′, they constitute together a basis of C, which is thus defined. The generalized inverse g is specified if (in addition to a basis for S) we know how it acts on all the r_g − r elements of the basis of C′.

A unique choice of C′ can be specified without ambiguity (if so desired) by requiring in addition that C′ is the orthogonal complement of N(g) with respect to C, in which case C assumes an orthogonal decomposition C = N(g) ⊥ C′.

Let D = g(C) be the image of C under g. If y ∈ C we have the unique decomposition y = y_N + y_C′, where y_N ∈ N(g) and y_C′ ∈ C′. In this case g(y) = g(y_N) + g(y_C′) = g(y_C′) and g(C) = g(C′) = D. If y ∈ C = N(p) then p(y) = 0 = (f ∘ g)(y) = f(g(y)) and g(y) ∈ N(f), so that g(C) = D ⊂ N(f).

The generalized inverse g is a one-to-one mapping from C′ to D, and dim D = dim C′ = Δr ≤ d = dim N(f), the last inequality following from the fact that D ⊂ N(f).

The range R(g) = g(Y) can be decomposed according to Y = R(f) ⊕ C as R(g) = g(R(f)) ⊕ g(C), and since g(R(f)) = S and g(C) = D, we have that R(g) = S ⊕ D.

We introduce now an alternative perspective on the geometry of a generalized inverse, which might seem at first to be unnecessarily complicated but will prove helpful in the study of the nonlinear case.

To define g on Y it is sufficient to define it on a subspace M complementary to N(g), since for Y = N(g) ⊕ M we have a unique decomposition y = y_N + y_M with y_N ∈ N(g), y_M ∈ M, and g(y) = g(y_M). Such a subspace M is provided by M = C′ ⊕ R(f). If y ∈ M, with unique decomposition y = y_C′ + y_R, y_C′ ∈ C′, y_R ∈ R(f), then g(y) = g(y_C′) + g(y_R), where g(y_C′) ∈ D and g(y_R) ∈ S. The affine subspace y + C′ = y_R + C′ of M is mapped into g(y_R) + D = g(y) + D, which is an affine subspace of R(g). The affine subspace y + R(f) of M is mapped into an affine subspace g(y) + S = g(y_C′) + S of R(g). The image g(y) is the unique intersection of these two affine subspaces of R(g). Since g(y) + D = g(y_R) + D is uniquely determined from the y_R component of y, it remains to specify a correspondence C from the affine subspaces y + R(f) of M to the affine subspaces g(y) + S of R(g). If g(y) + S = C(y + R(f)), then g(y) is determined from the intersection

[g(y_R) + D] ∩ C(y + R(f)) = {g(y)}

Fig. 2. More geometry of the generalized inverse of a linear mapping: the intersection of y + R(f) and y + C′ determines the unique point y which g (the generalized inverse of f) maps into x = g(y). The image g(y) is uniquely determined from the intersection of g(y) + D (= image of y + C′) and g(y) + S (= image of y + R(f) under the correspondence C)

In order to specify a unique g out of the class of all possible generalized inverses of f, we must specify the subspaces C complementary to R(f), S complementary to N(f), and D ⊂ N(f). Furthermore, we must specify the restricted linear mapping g|C′ : C′ → g(C′) = D which brings the elements of C′ into a one-to-one correspondence with those of D.

Let us look next into the special case where g is a reflexive generalized inverse, i.e., when g ∘ f ∘ g = g also. In this particular case r_g = r necessarily (r_g ≥ r follows from the fundamental property and r_g ≤ r from the reflexivity property). It follows that S = R(g) and C = N(g). Thus p = f ∘ g is a projector on R(f) along N(g), and q = g ∘ f is a projector on S = R(g) along N(f). Let us notice that a reflexive generalized inverse is completely determined once the subspaces C and S are known.

A least-squares generalized inverse is one with C = R(f)^⊥, in which case we have the orthogonal decomposition Y = R(f) ⊥ R(f)^⊥, and ŷ = p(y) is the element of R(f) closest to y.
The projector p is the orthogonal projector on R(f).

A minimum-norm generalized inverse is one for which S = N(f)^⊥, in which case g(y) is the element of the solution affine subspace corresponding to ŷ = p(y) which has the smallest norm. The projector q is the orthogonal projector on N(f)^⊥.

From all the preceding geometric characteristics of a generalized inverse g : Y → X of a linear mapping f : X → Y, we summarize those which will be helpful in dealing with nonlinear mappings.

The space Y is "sliced" by p = f ∘ g into affine subspaces (p-slices) which are parallel to a specific (linear) subspace C. Each slice corresponds to a particular element ŷ of R(f) ⊂ Y, since p = f ∘ g maps all the elements of the slice onto that ŷ.

The mapping f "slices" the space X into affine subspaces (f-slices) which are parallel to the subspace N(f). Each slice corresponds to a particular element ŷ of R(f), since f maps all the elements of the slice on that ŷ; we may therefore call this slice the "solution space of ŷ".

The generalized inverse g maps R(f) to a subset S of R(g) which has the property that it intersects each solution space (f-slice) in a single element.

The mapping q = g ∘ f slices X into q-slices, each slice corresponding to a particular element x̂ of S, since q maps all the elements of the slice on that x̂. The q-slices are identical to the f-slices.

Finally, g "slices" Y into affine subspaces which are parallel to the subspace N(g). These g-slices have the property that each is contained in one of the p-slices. Thus each p-slice is itself further sliced into a more "refined" set of g-slices. Only in the case of a reflexive generalized inverse are the p-slices and g-slices identical.

These slices, all of which are affine subspaces, can be described by a single "generating" linear subspace (f-slices and q-slices by N(f), g-slices by N(g), p-slices by C). Of course, this possibility cannot survive in the nonlinear case.

3 Generalized inverses of nonlinear operators

Let X, Y be finite-dimensional spaces and f : X → Y a (nonlinear) mapping from X to Y. The question arises whether it is possible to define, under some appropriate conditions, a class of mappings g : Y → X which can serve as generalized inverses of the mapping f. The similar theory developed for linear mappings may serve as a guide to a certain extent, although the nonlinearity of f poses problems which do not allow for a direct extension of the existing theory.

Definition. A partition of a set A into disjoint sets A_t, A = ∪_t A_t, is called a fibering of A, and its elements A_t are called fibers. Such a partition can arise from an equivalence relation on A, each fiber consisting of equivalent elements of A (Loomis and Sternberg 1980).

A mapping f defined on X gives rise to a fibering F of X through the equivalence relation x₁ ~ x₂ if f(x₁) = f(x₂). To every element y ∈ R(f) in the range R(f) of f corresponds a fiber F_y ∈ F defined by

F_y = {x ∈ X | f(x) = y}    (1)

The function f gives rise to a bijection

f̃ : R(f) → F : y ↦ F_y

Obviously, for every x ∈ X, x ∈ F_f(x).

Definition. The mapping π : X → F : x ↦ F_f(x) is called the projection mapping from X to the fibering F. It holds that π = f̃ ∘ f.

When more than one fibering is involved, we denote π by π_F. Thus π_F(x) is the unique fiber from the fibering F which passes through a given point x.

Definition. A section of a fibering F of X is the range S = R(s) of a mapping s : F → X such that π ∘ s = id_F, where id_F is the identity mapping in F.

A section S of a fibering F intersects any one of its fibers in a single element: for any x ∈ S, S ∩ π_F(x) = {x}.

Definition. Let F and K be two fiberings of the same set A. K is called a refinement of F if every fiber K_t of K is a subset of some fiber F_s of F: ∀ K_t ∈ K ∃ F_s ∈ F such that K_t ⊂ F_s.

If K is a refinement of F, then π_K(z) ⊂ π_F(z) for any z.

Definition. Two fiberings F and H of the same space X are called complementary if every fiber of F is a section of H, and vice versa: π_F(x) ∩ π_H(x) = {x} ∀ x ∈ X.

Definition. A fibering F of X induces a fibering F′ = F|M on a manifold M ⊂ X, with fibers π_F′(x) = M ∩ π_F(x) for every x ∈ M. The fibering F′ = F|M is called the restriction of the fibering F on M.

Example 1. We shall use the simplest possible geodetic network in order to illustrate some of the results. Following Grafarend and Schaffrin (1974), we consider a three-point horizontal network P₁P₂P₃ with coordinates

x = [x₁ y₁ x₂ y₂ x₃ y₃]ᵀ

where all sides a = P₂P₃, b = P₁P₃, c = P₁P₂ and all angles A, B, C at points P₁, P₂, P₃, respectively, have been observed. The observation vector

y = [a b c A B C]ᵀ

is a function y = f(x) of the coordinates, explicitly given by

a = √((x₃ − x₂)² + (y₃ − y₂)²)
b = √((x₃ − x₁)² + (y₃ − y₁)²)
c = √((x₂ − x₁)² + (y₂ − y₁)²)
A = arctan[(x₃ − x₁)/(y₃ − y₁)] − arctan[(x₂ − x₁)/(y₂ − y₁)]
B = arctan[(x₂ − x₁)/(y₂ − y₁)] − arctan[(x₃ − x₂)/(y₃ − y₂)]
C = arctan[(x₃ − x₂)/(y₃ − y₂)] − arctan[(x₃ − x₁)/(y₃ − y₁)]

or

A = arccos[((x₃ − x₁)(x₂ − x₁) + (y₃ − y₁)(y₂ − y₁)) / (√((x₃ − x₁)² + (y₃ − y₁)²) √((x₂ − x₁)² + (y₂ − y₁)²))]
B = arccos[((x₃ − x₂)(x₁ − x₂) + (y₃ − y₂)(y₁ − y₂)) / (√((x₃ − x₂)² + (y₃ − y₂)²) √((x₁ − x₂)² + (y₁ − y₂)²))]
C = arccos[((x₃ − x₂)(x₃ − x₁) + (y₃ − y₂)(y₃ − y₁)) / (√((x₃ − x₂)² + (y₃ − y₂)²) √((x₃ − x₁)² + (y₃ − y₁)²))]

We now consider the range R(f). When y = f(x) for some x, i.e., when y ∈ R(f), the six elements of y are not independent. Only three parameters, say a, b, c, are independent. The remaining ones, the angles, are uniquely determined from well-known trigonometric relations, e.g.,

cos A = (b² + c² − a²)/(2bc)
cos B = (a² + c² − b²)/(2ac)
cos C = (a² + b² − c²)/(2ab)

This means that R(f) is of dimension 6 − 3 = 3, since the three cosine conditions satisfied by its elements reduce the number of independent parameters by three. We may think of q = [a b c]ᵀ as three curvilinear coordinates on R(f), which in this case obtains the direct representation y = y(q), explicitly given by

a = a,  b = b,  c = c
A = arccos[(b² + c² − a²)/(2bc)]
B = arccos[(a² + c² − b²)/(2ac)]
C = arccos[(a² + b² − c²)/(2ab)]

We now move on to the solution space F_y for a fixed y ∈ R(f); x ∈ F_y whenever f(x) = y. Since A, B, C are determined from a, b, c, the only independent conditions on x are those related to the given values of a, b, c:

a = √((x₃ − x₂)² + (y₃ − y₂)²)
b = √((x₃ − x₁)² + (y₃ − y₁)²)
c = √((x₂ − x₁)² + (y₂ − y₁)²)

These three conditions reduce the dimension of F_y to 6 − 3 = 3. If x₀ is a fixed element of X, the solution space through x₀ is the fiber π_F(x₀), and x ∈ π_F(x₀) if it satisfies the three conditions

(x₃ − x₂)² + (y₃ − y₂)² = (x₃⁰ − x₂⁰)² + (y₃⁰ − y₂⁰)²
(x₃ − x₁)² + (y₃ − y₁)² = (x₃⁰ − x₁⁰)² + (y₃⁰ − y₁⁰)²
(x₂ − x₁)² + (y₂ − y₁)² = (x₂⁰ − x₁⁰)² + (y₂⁰ − y₁⁰)²

which correspond to f(x) = f(x₀), expressed by a = a₀, b = b₀, c = c₀. An alternative equivalent set of conditions is

bc cos A = b₀c₀ cos A₀
ac cos B = a₀c₀ cos B₀
ab cos C = a₀b₀ cos C₀

which in terms of coordinates are

(x₃ − x₁)(x₂ − x₁) + (y₃ − y₁)(y₂ − y₁) = (x₃⁰ − x₁⁰)(x₂⁰ − x₁⁰) + (y₃⁰ − y₁⁰)(y₂⁰ − y₁⁰)
(x₃ − x₂)(x₁ − x₂) + (y₃ − y₂)(y₁ − y₂) = (x₃⁰ − x₂⁰)(x₁⁰ − x₂⁰) + (y₃⁰ − y₂⁰)(y₁⁰ − y₂⁰)
(x₃ − x₂)(x₃ − x₁) + (y₃ − y₂)(y₃ − y₁) = (x₃⁰ − x₂⁰)(x₃⁰ − x₁⁰) + (y₃⁰ − y₂⁰)(y₃⁰ − y₁⁰)

We now return to the theory. Instead of speaking of the fibering F of X induced by a certain mapping f, we may take the inverse approach and describe a given fibering by means of a mapping. For a mapping to induce a fibering on its domain space it is necessary that it is not injective; otherwise we obtain the "trivial fibering" where each fiber consists of a single element of X!

The use of a mapping which, as our mapping of interest f, is not surjective is not the most efficient way of describing a fibering. More straightforward is the use of a surjective mapping, say h : X → R^q, where to every element d ∈ R^q corresponds a fiber H_d = {x ∈ X | h(x) = d} from the fibering H induced by h. The fibers H_d are manifolds of dimension m − q = dim X − q. A more general way of describing a fibering is by means of a mapping

v : X × R^q → R^q : (x, d) ↦ v(x, d)

with q < m, such that v_d(·) ≡ v(·, d) is surjective for every d ∈ R^q. This is guaranteed when ∂v/∂d ≠ 0 at every x and d. The fibers are the (m − q)-dimensional manifolds

H_d = {x ∈ X | v(x, d) = 0}

The previous case corresponds to the choice v(x, d) = h(x) − d.

Another approach to the description of fiberings of a space X is by means of mappings having X as their range, rather than their domain as before. A bijective mapping

φ : R^q × R^(m−q) → X : (d, p) ↦ φ(d, p)

describes a fibering H with fibers H_d = {x ∈ X | x = φ(d, p) for some p}. Each fiber is the range H_d = R(φ_d).
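The three descriptions of a fibering can be compared on a toy example (added here; the particular h, v and φ are hypothetical choices, not from the paper): fibering the punctured plane X = R² \ {0} into circles about the origin, with q = 1 and fibers of dimension m − q = 1.

```python
import math

# Fibering of the punctured plane into circles about the origin (q = 1,
# m = 2); the particular mappings below are hypothetical choices.

def h(x):                  # surjective description: H_d = {x | h(x) = d}
    return x[0] ** 2 + x[1] ** 2

def v(x, d):               # implicit description:  H_d = {x | v(x, d) = 0}
    return h(x) - d

def phi(d, p):             # parametric description: H_d = R(phi_d)
    r = math.sqrt(d)
    return (r * math.cos(p), r * math.sin(p))

d = 4.0
for p in (0.0, 1.0, 2.5):  # every point produced by phi_d lies on H_d
    x = phi(d, p)
    assert abs(v(x, d)) < 1e-12
    assert abs(h(x) - d) < 1e-12

# The coordinates (d, p) = phi^{-1}(x) are "adapted" to the fibering:
x = phi(4.0, 1.0)
assert abs(h(x) - 4.0) < 1e-12                      # d recovered by h
assert abs(math.atan2(x[1], x[0]) - 1.0) < 1e-12    # p recovered
```

The implicit case is recovered by the choice v(x, d) = h(x) − d, as in the text, while the "free" parameter p serves as an intrinsic coordinate on each circle.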
Here φ_d(·) ≡ φ(d, ·). The inverse mapping φ⁻¹ is in fact a coordinate mapping on X, where the coordinates (d, p) = φ⁻¹(x) are "adapted" to the fibering: each fiber corresponds to a fixed value of d, while the "free" parameters p may serve as (intrinsic) coordinates on the fiber.

We assume for the moment that f is defined on the whole of X, is continuous, and has a differential mapping df_x at each x, which is a linear mapping from the tangent vector space T_x at x to the tangent vector space T_f(x) at f(x) (Choquet-Bruhat et al. 1977, p. 121). The mapping df_x is defined by

df_x : T_x → T_f(x) : v ↦ u,  u(φ) = v(φ ∘ f) for every φ : Y → R    (2)

(where the tangent vectors v and u are visualized as "directional derivatives" acting on functions).

We further assume that df_x has constant rank over X, rank(df_x) = r ≤ min(n, m), where m = dim X and n = dim Y. Consequently, R(f) is an r-dimensional submanifold of Y.

We now come to the possibility of defining a generalized inverse g of the mapping f in a similar way as in the linear case.

Definition. A mapping g : Y → X is called a generalized inverse of a given mapping f : X → Y when

f ∘ g ∘ f = f    (3)

[We denote Eq. (3) by (G1).] This means that for every x ∈ X, (f ∘ g ∘ f)(x) = f(x), so that for every y = f(x) ∈ R(f) it holds that (f ∘ g)(y) = f(g(y)) = y. As a consequence g(y) ∈ F_y, i.e., g must map every element of R(f) into an element of its corresponding fiber. In other words, g maps R(f) onto a section S of the fibering F, where S ⊂ R(g). The restriction of g on R(f) is a bijection between R(f) and S, which are both r-dimensional manifolds.

The relation (G1) satisfied by the generalized inverse g implies a corresponding relation between the differentials df_x and dg_f(x) of f and g, respectively:

df_x ∘ dg_f(x) ∘ df_x = df_x    (4)

which follows by implementing the implicit function theorem (Choquet-Bruhat et al. 1977, p. 91).

Lemma. R(f ∘ g) = R(f).

Proof. y ∈ R(f) ⇒ ∃ x ∈ X:

y = f(x) = (f ∘ g ∘ f)(x) = (f ∘ g)[f(x)] = (f ∘ g)(y) ∈ R(f ∘ g) ⇒ R(f) ⊂ R(f ∘ g)

which, combined with the obvious relation R(f ∘ g) ⊂ R(f), implies that in fact R(f ∘ g) = R(f). □

Assuming that g has a constant rank r_g = rank(dg_y) for every y, application of the property of the rank of a linear map composition, r(AB) ≤ min[r(A), r(B)], to Eq. (3) gives r ≤ min(r, r_g) ≤ r_g, so that

r ≤ r_g ≤ min(n, m)    (5)

Once S is given, g is defined on R(f) ⊂ Y. The question is how this particular g is extended outside R(f). A step in this direction is to note that, as a consequence of (G1), both q = g ∘ f and p = f ∘ g are idempotent mappings (q² = q, p² = p):

f ∘ g ∘ f = f ⇒ (g ∘ f) ∘ (g ∘ f) = g ∘ f and (f ∘ g) ∘ (f ∘ g) = f ∘ g    (6)

which can be considered as "nonlinear projections"

q = g ∘ f : X → R(g ∘ f) ⊂ R(g) ⊂ X
p = f ∘ g : Y → R(f ∘ g) = R(f) ⊂ Y    (7)

We have set R(g ∘ f) ⊂ R(g), which is obvious, and R(f ∘ g) = R(f) according to the previous lemma.

Elements belonging to the range of idempotent mappings are invariant under the mapping. Idempotent mappings also induce a fibering of the space they act on. We denote by P, Q the fiberings induced by p and q, respectively, with corresponding elements P_ŷ, ŷ ∈ R(p) = R(f), and Q_x̂, x̂ ∈ R(q) ⊂ R(g).

The mapping g gives rise to a fibering G of Y with elements G_x = {y ∈ Y | g(y) = x}. Obviously, for every element y of Y it holds that y ∈ G_g(y) = π_G(y). If y ∈ R(f) then y ∈ G_x = π_G(f(x)), where {x} = F_y ∩ S.

Lemma. For the fiberings of the space Y it holds that the fibering G induced by g is a refinement of the fibering P induced by p = f ∘ g.

Proof. For any z ∈ Y, π_G(z) ∈ G and π_P(z) ∈ P. For every y ∈ π_G(z), g(y) = g(z) ⇒ p(y) = f(g(y)) = f(g(z)) = p(z) and y ∈ π_P(z). Consequently π_G(z) ⊂ π_P(z) for any z ∈ Y, and thus G is a refinement of P. □

Fig. 3. The geometry of the generalized inverse g of a nonlinear mapping f. All elements of π_G(y) are mapped into the same element g(y). The elements of π_P(y) are mapped into elements of π_F(x) and thus projected by p on the same ŷ
78

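Relation (4) can be exercised numerically with finite-difference Jacobians. The sketch below assumes the triangle mapping f of Example 1 (vertex coordinates x = (x1, y1, ..., y3) mapped to the sides a, b, c and the law-of-cosines angles A, B, C) together with the generalized inverse g written out in the continuation of that example; the numeric test point is arbitrary. Note that the chain rule applied to f ∘ g ∘ f = f places the outer differential at g(f(x)), i.e. df_g(f(x)) ∘ dg_f(x) ∘ df_x = df_x, and this is what the code tests.

```python
import math

def f(x):
    # Example 1: vertex coordinates -> sides a, b, c and angles A, B, C
    x1, y1, x2, y2, x3, y3 = x
    a = math.hypot(x3 - x2, y3 - y2)
    b = math.hypot(x3 - x1, y3 - y1)
    c = math.hypot(x2 - x1, y2 - y1)
    return [a, b, c,
            math.acos((b*b + c*c - a*a) / (2*b*c)),
            math.acos((a*a + c*c - b*b) / (2*a*c)),
            math.acos((a*a + b*b - c*c) / (2*a*b))]

def g(y):
    # the generalized inverse of Example 1 (B and C are ignored)
    a, b, c, A, B, C = y
    s, t = b*math.sin(A), b*math.cos(A)
    return [(c - s)/3, (a - c - t)/3, (c - s)/3,
            (a + 2*c - t)/3, (c + 2*s)/3, (a - c + 2*t)/3]

def jac(h, v, eps=1e-6):
    # central-difference Jacobian of h at v
    n = len(h(v))
    J = [[0.0] * len(v) for _ in range(n)]
    for j in range(len(v)):
        vp, vm = list(v), list(v)
        vp[j] += eps
        vm[j] -= eps
        hp, hm = h(vp), h(vm)
        for i in range(n):
            J[i][j] = (hp[i] - hm[i]) / (2 * eps)
    return J

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

x = [0.3, -1.2, 2.0, 0.7, 1.1, 2.5]        # a non-degenerate triangle
Jf = jac(f, x)
lhs = matmul(matmul(jac(f, g(f(x))), jac(g, f(x))), Jf)
assert all(abs(lhs[i][j] - Jf[i][j]) < 1e-4
           for i in range(6) for j in range(6))
```

The tolerance only needs to absorb the finite-difference truncation error; the identity itself holds exactly, because f ∘ g ∘ f = f holds in a whole neighborhood of x.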
Example 1 (continued). We claim that the mapping g defined by x = g(y), explicitly

  x1 = (c − b sin A)/3,   y1 = (a − c − b cos A)/3
  x2 = (c − b sin A)/3,   y2 = (a + 2c − b cos A)/3
  x3 = (c + 2b sin A)/3,  y3 = (a − c + 2b cos A)/3

is a generalized inverse of the previously defined mapping f. To prove this we need to show that, for any x, y = f(x) is identical to y′ = (f ∘ g ∘ f)(x) = (f ∘ g)(y) = f(x′), where x′ = g(y) is explicitly given by

  x′1 = (c − b sin A)/3,   y′1 = (a − c − b cos A)/3
  x′2 = (c − b sin A)/3,   y′2 = (a + 2c − b cos A)/3
  x′3 = (c + 2b sin A)/3,  y′3 = (a − c + 2b cos A)/3

Since for an arbitrary x, y = f(x) ∈ R(f), the angle A satisfies the cosine condition and thus y′ = f(x′) becomes

  a′ = √[(x′3 − x′2)² + (y′3 − y′2)²] = √(b² + c² − 2bc cos A) = a
  b′ = √[(x′3 − x′1)² + (y′3 − y′1)²] = b
  c′ = √[(x′2 − x′1)² + (y′2 − y′1)²] = c

Since y′ ∈ R(f), the angles A′, B′, C′ are determined from a′ = a, b′ = b, c′ = c through the cosine conditions and are therefore the same as those in y, i.e., A′ = A, B′ = B, C′ = C. Therefore y′ = y and g is in fact a generalized inverse of f.

Turning to the range R(g) of the generalized inverse g: for an arbitrary y ∉ R(f), x = g(y) is given from the defining equations of g, which show that x satisfies at least one condition, namely x1 = x2. The remaining five equations depend on four parameters a, b, c, A, so there must be one more condition, which results from the easily derived relations y2 − y1 = c and x1 + x2 + x3 = c. Therefore x ∈ R(g) if its elements satisfy

  x1 = x2,   x1 + x2 + x3 = y2 − y1

Thus R(g) has dimension 6 − 2 = 4. The parameters q = [a b c A]^T may serve as curvilinear coordinates for R(g), since the relation x = g(y) degenerates into x = x(q).

We now consider the fibering F′ induced by F on R(g). The elements of R(g) which satisfy f(x) = y for a fixed y ∈ R(f) must simultaneously satisfy the conditions for x ∈ F_y

  (x3 − x2)² + (y3 − y2)² = a²
  (x3 − x1)² + (y3 − y1)² = b²
  (x2 − x1)² + (y2 − y1)² = c²

and the conditions for x ∈ R(g)

  x1 = x2,   x1 + x2 + x3 = y2 − y1

The combination gives after some manipulation the five conditions

  x1 = x2,   x1 + x2 + x3 = c,   y2 − y1 = c
  y3 − y1 = b cos A,   x3 − x1 = b sin A

which are those satisfied by any x ∈ F′_y = F_y ∩ R(g). The dimension of F′_y is 6 − 5 = 1. The fibers F_y, as y runs through R(f), constitute a fibering F of X, while the fibers F′_y constitute the fibering F′ induced by F on R(g).

We now move on to the section S = g(R(f)). When y ∈ R(f), the angles A, B, C are determined from a, b, c, and so too is x = g(y). From the definition of g and the fact that

  cos A = (b² + c² − a²)/(2bc),   sin A = √[4b²c² − (b² + c² − a²)²]/(2bc)

x becomes explicitly

  x1 = c/3 − √[4b²c² − (b² + c² − a²)²]/(6c),   y1 = a/3 − c/2 − (b² − a²)/(6c)
  x2 = c/3 − √[4b²c² − (b² + c² − a²)²]/(6c),   y2 = a/3 + c/2 − (b² − a²)/(6c)
  x3 = c/3 + √[4b²c² − (b² + c² − a²)²]/(3c),   y3 = a/3 + (b² − a²)/(3c)

The elements x ∈ S are functions of three parameters a, b, c. Therefore S has dimension three and it is described by x = x(q), where q = [a b c]^T are curvilinear coordinates on S. The section S can also be described by three conditions on the coordinates of its points. The obvious one is x1 = x2; the second results from x1 + x2 + x3 = c = y2 − y1. The remaining one comes from y1 + y2 + y3 = a. Thus x ∈ S when

  x1 = x2,   x1 + x2 + x3 = y2 − y1
  (y1 + y2 + y3)² = (x3 − x2)² + (y3 − y2)²

The first two conditions are those satisfied by the elements of the range R(g) and therefore S = g(R(f)) ⊆ R(g) = g(Y).
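The claim above can also be exercised numerically. The sketch below assumes (consistently with the explicit g just given) that f maps the vertex coordinates (x1, y1, ..., y3) to the sides a, b, c and the law-of-cosines angles A, B, C; the test values are arbitrary:

```python
import math

def f(x):
    # Example 1: vertex coordinates -> sides a, b, c and angles A, B, C
    x1, y1, x2, y2, x3, y3 = x
    a = math.hypot(x3 - x2, y3 - y2)
    b = math.hypot(x3 - x1, y3 - y1)
    c = math.hypot(x2 - x1, y2 - y1)
    return [a, b, c,
            math.acos((b*b + c*c - a*a) / (2*b*c)),
            math.acos((a*a + c*c - b*b) / (2*a*c)),
            math.acos((a*a + b*b - c*c) / (2*a*b))]

def g(y):
    # the generalized inverse claimed above (B and C are ignored)
    a, b, c, A, B, C = y
    s, t = b*math.sin(A), b*math.cos(A)
    return [(c - s)/3, (a - c - t)/3, (c - s)/3,
            (a + 2*c - t)/3, (c + 2*s)/3, (a - c + 2*t)/3]

# (G1): f(g(f(x))) = f(x) for an arbitrary non-degenerate configuration
x = [0.3, -1.2, 2.0, 0.7, 1.1, 2.5]
assert all(abs(u - v) < 1e-9 for u, v in zip(f(g(f(x))), f(x)))

# for an arbitrary y outside R(f), x = g(y) satisfies the two conditions
# characterizing R(g): x1 = x2 and x1 + x2 + x3 = y2 - y1
y = [2.0, 1.5, 1.8, 0.9, 1.1, 0.7]
x1, y1, x2, y2, x3, y3 = g(y)
assert abs(x1 - x2) < 1e-12
assert abs((x1 + x2 + x3) - (y2 - y1)) < 1e-12
```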
For the mapping p = f ∘ g and the fibers p_P(y): setting y′ = p(y) = f(g(y)) = f(x), where x = g(y), and using the definitions of f and g we obtain the explicit representation of p:

  a′ = √(b² + c² − 2bc cos A)
  b′ = b
  c′ = c
  A′ = A
  cos B′ = (c − b cos A)/a′ = (c − b cos A)/√(b² + c² − 2bc cos A)
  cos C′ = (b − c cos A)/a′ = (b − c cos A)/√(b² + c² − 2bc cos A)

If we replace b = b′, c = c′, A = A′ we obtain the three cosine conditions, so that y′ ∈ R(f) and in fact R(p) = R(f), since y′ depends on three parameters b, c, A and therefore R(p) has the same dimension, three, as R(f).

For a fixed element y0 = [a0 b0 c0 A0 B0 C0]^T, the fiber p_P(y0) induced by p through y0 consists of all the elements y such that p(y) = p(y0) = y′, which means that y ∈ p_P(y0) when its elements satisfy the condition p(y) = p(y0), which in view of the preceding description of p takes the explicit form

  √(b² + c² − 2bc cos A) = √(b0² + c0² − 2 b0 c0 cos A0)
  b = b0
  c = c0
  A = A0
  (c − b cos A)/√(b² + c² − 2bc cos A) = (c0 − b0 cos A0)/√(b0² + c0² − 2 b0 c0 cos A0)
  (b − c cos A)/√(b² + c² − 2bc cos A) = (b0 − c0 cos A0)/√(b0² + c0² − 2 b0 c0 cos A0)

Since the first one and the last two are a direct consequence of the remaining three, the only conditions needed so that y ∈ p_P(y0) are

  a = any,   b = b0,   c = c0,   A = A0,   B = any,   C = any

This means that p_P(y0) has dimension 6 − 3 = 3, and the three parameters a, B, C can serve as a system of three curvilinear coordinates on p_P(y0), which assumes the direct representation y = y(q) = y(q; y0) with explicit form

  a = a,   b = b0,   c = c0,   A = A0,   B = B,   C = C

For the mapping q = g ∘ f and the fibers p_Q(x) we set x′ = q(x) = g(f(x)) = g(y), where y = f(x) ∈ R(f), and therefore the angles A, B, C are functions of the sides a, b, c. We need, in addition to cos A = (b² + c² − a²)/(2bc), the corresponding relation

  sin A = √(1 − cos² A) = T/(bc),   where T = √[b²c² − ((b² + c² − a²)/2)²]

Using the explicit form of y = f(x) we evaluate

  (b² + c² − a²)/2 = (x3 − x1)(x2 − x1) + (y3 − y1)(y2 − y1)
  T = (x3 − x1)(y2 − y1) − (x2 − x1)(y3 − y1)

Using the explicit form for g and the relations for cos A and sin A just given we obtain the explicit form of x′ = g(y):

  x′1 = c/3 − T/(3c),    y′1 = (a − c)/3 − (1/(3c)) (b² + c² − a²)/2
  x′2 = c/3 − T/(3c),    y′2 = (a + 2c)/3 − (1/(3c)) (b² + c² − a²)/2
  x′3 = c/3 + 2T/(3c),   y′3 = (a − c)/3 + (2/(3c)) (b² + c² − a²)/2

which in terms of the elements of x become

  (x′3 − x′2)² + (y′3 − y′2)² = a² = (x3 − x2)² + (y3 − y2)²
  (x′3 − x′1)² + (y′3 − y′1)² = b² = (x3 − x1)² + (y3 − y1)²
  (x′2 − x′1)² + (y′2 − y′1)² = c² = (x2 − x1)² + (y2 − y1)²

Comparing these expressions to the identical ones for F_y, we conclude that if x′ ∈ p_Q(x) then also x′ ∈ F_f(x), and vice versa. Thus the fibers of Q are identical to the corresponding solution spaces.

In order to express x′ = q(x) ∈ p_Q(x) as a function of x we must replace a, c, T and (b² + c² − a²)/2 with their expressions in terms of x, and thus obtain

  x′1 = √[(x2 − x1)² + (y2 − y1)²]/3 − [(x3 − x1)(y2 − y1) − (x2 − x1)(y3 − y1)] / {3 √[(x2 − x1)² + (y2 − y1)²]}

  y′1 = {√[(x3 − x2)² + (y3 − y2)²] − √[(x2 − x1)² + (y2 − y1)²]}/3 − [(x3 − x1)(x2 − x1) + (y3 − y1)(y2 − y1)] / {3 √[(x2 − x1)² + (y2 − y1)²]}

  x′2 = √[(x2 − x1)² + (y2 − y1)²]/3 − [(x3 − x1)(y2 − y1) − (x2 − x1)(y3 − y1)] / {3 √[(x2 − x1)² + (y2 − y1)²]}

  y′2 = {√[(x3 − x2)² + (y3 − y2)²] + 2 √[(x2 − x1)² + (y2 − y1)²]}/3 − [(x3 − x1)(x2 − x1) + (y3 − y1)(y2 − y1)] / {3 √[(x2 − x1)² + (y2 − y1)²]}
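The idempotency of p asserted by Eq. (6) can be checked directly on the explicit representation just derived; a minimal sketch with arbitrary test values (b, c, A are kept, while a, B, C are replaced by the cosine-condition values):

```python
import math

def p(y):
    # explicit p = f∘g of Example 1: keeps b, c, A, restores a, B, C
    a, b, c, A, B, C = y
    a1 = math.sqrt(b*b + c*c - 2*b*c*math.cos(A))
    return [a1, b, c, A,
            math.acos((c - b*math.cos(A)) / a1),
            math.acos((b - c*math.cos(A)) / a1)]

y = [2.0, 1.5, 1.8, 0.9, 1.1, 0.7]     # arbitrary y outside R(f)
y1 = p(y)
assert all(abs(u - v) < 1e-12 for u, v in zip(p(y1), y1))   # p∘p = p
assert abs(y1[3] + y1[4] + y1[5] - math.pi) < 1e-12         # p(y) in R(f)
```

The second assertion checks that the projected element satisfies the angle condition A′ + B′ + C′ = π, i.e. that R(p) = R(f).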
  x′3 = √[(x2 − x1)² + (y2 − y1)²]/3 + 2[(x3 − x1)(y2 − y1) − (x2 − x1)(y3 − y1)] / {3 √[(x2 − x1)² + (y2 − y1)²]}

  y′3 = {√[(x3 − x2)² + (y3 − y2)²] − √[(x2 − x1)² + (y2 − y1)²]}/3 + 2[(x3 − x1)(x2 − x1) + (y3 − y1)(y2 − y1)] / {3 √[(x2 − x1)² + (y2 − y1)²]}

From the relations giving x′ in terms of a, b, c it is obvious that the range R(q) of q is identical with the section S.

As regards the fibering G induced by g: let y0 = [a0 b0 c0 A0 B0 C0]^T be a fixed element and consider the fiber p_G(y0) through y0 induced by g. For any other point y ∈ p_G(y0) it must hold that g(y) = g(y0) = x, and using the explicit form of g we arrive at the conditions

  3x1 = c − b sin A = c0 − b0 sin A0
  3y1 = a − c − b cos A = a0 − c0 − b0 cos A0
  3x2 = c − b sin A = c0 − b0 sin A0
  3y2 = a + 2c − b cos A = a0 + 2c0 − b0 cos A0
  3x3 = c + 2b sin A = c0 + 2b0 sin A0
  3y3 = a − c + 2b cos A = a0 − c0 + 2b0 cos A0

The condition for x2 has been discarded because it is identical to that for x1. The remaining ones are not independent, due to the relation between cos A and sin A. Thus

  y1 + y2 + y3 = a = a0
  (x3 − x1)² + (y3 − y1)² = b² = b0² ⇒ b = b0
  2x1 + x3 = c = c0

Using a = a0, b = b0 and c = c0 in the above conditions yields cos A = cos A0, sin A = sin A0, so that A = A0. Therefore y ∈ p_G(y0), for a given fixed y0, when its elements satisfy

  a = a0,   b = b0,   c = c0,   A = A0,   B = any,   C = any

The fiber p_G(y0) has dimension 6 − 4 = 2. The elements q = [B C]^T may serve as a system of curvilinear coordinates, so that p_G(y0) is described by y = y(q) = y(q; y0), or explicitly

  a = a0,   b = b0,   c = c0,   A = A0,   B = B,   C = C

Proposition 1. A generalized inverse g of f is uniquely defined if the following are specified:
  The range R(g) ⊆ X of the generalized inverse g.
  The section S = g(R(f)) ⊆ R(g) of the fibering F of X induced by f.
  The fibering P to be induced by p = f ∘ g, i.e., all the fibers P_y corresponding to every y ∈ R(f).
  The refinement of the fibering P by the fibering G, i.e., for every fiber P_y ∈ P its own fibering with members G_x corresponding to all x ∈ F_y.
  For a given section M of the fibering G and every y ∈ R(f), a corresponding mapping

  c_y : M ∩ p_P(y) → R(g) ∩ F_y

which is subject to the following restriction: if p_G(y) ∩ M = {y0} then c_y(y0) ∈ F_y ∩ S = {g(y)}.

Fig. 4. An illustration of Proposition 1

Proof. Consider any element z ∈ Y. Then the fibering P determines a unique fiber p_P(z) containing z, while y = p(z) ∈ R(f) is determined by R(f) ∩ p_P(z) = {y} and p_P(y) = p_P(z). Since y uniquely determines F_y, g(y) is also uniquely determined by F_y ∩ S = {g(y)}. The fibering G determines a unique fiber p_G(z) = G_g(z) containing z, and furthermore p_G(z) = G_g(z) ⊆ p_P(z) = p_P(y), due to the fact that G is a refinement of P. Since M is a section of G, it holds that p_G(z) intersects M at a unique element p_G(z) ∩ M = {z0}. The known mapping c_y defines an element c_y(z0) ∈ R(g) and we define g(z) ≡ c_y(z0). It remains to show that the mapping g defined for every z ∈ Y by this procedure is in fact a generalized inverse of f. For an arbitrary x ∈ X consider z = f(x). Since z ∈ R(f) it follows that in this case y = z, and by the restriction imposed on c_y for its action on elements of R(f) it must hold that g(z) ≡ c_y(z) ∈ F_z ∩ S, implying that g(z) ∈ F_z and therefore
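The two-dimensional fiber structure of G just derived can be illustrated numerically: two elements sharing a, b, c, A but differing in B and C belong to the same fiber p_G(y0), and the explicit g of Example 1 (assumed below, with arbitrary test values) indeed maps them to the same point:

```python
import math

def g(y):
    # the generalized inverse of Example 1 (B and C are ignored)
    a, b, c, A, B, C = y
    s, t = b*math.sin(A), b*math.cos(A)
    return [(c - s)/3, (a - c - t)/3, (c - s)/3,
            (a + 2*c - t)/3, (c + 2*s)/3, (a - c + 2*t)/3]

# same a, b, c, A but different B, C: both lie in one fiber p_G(y0)
ya = [2.0, 1.5, 1.8, 0.9, 0.4, 0.6]
yb = [2.0, 1.5, 1.8, 0.9, 1.3, 2.2]
assert g(ya) == g(yb)
```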
f(g(z)) = z, or f(g(f(x))) = (f ∘ g ∘ f)(x) = f(x), and since x has been arbitrarily chosen, f ∘ g ∘ f = f and g is indeed a generalized inverse of f. ∎

Example 1 (continued). We now consider the choice of a section M of G. We have seen that y ∈ p_G(y0) when

  a = a0,   b = b0,   c = c0,   A = A0,   B = any,   C = any

In order to create a section M of G we need only pick one element from each one of its fibers, and this can be done by assigning values to both B and C. If in addition we want R(f) ⊆ M, we must assign the known values

  B0 = arccos[(a0² + c0² − b0²)/(2 a0 c0)]   and   C0 = arccos[(a0² + b0² − c0²)/(2 a0 b0)]

to B and C, respectively, whenever y0 ∈ R(f).

Thus M may be determined from the two conditions

  cos B = (a² + c² − b²)/(2ac),   cos C = (a² + b² − c²)/(2ab)

and it has dimension 6 − 2 = 4. M contains the three-dimensional R(f), which satisfies these conditions and in addition the similar one for cos A. The independent parameters a, b, c, A may serve as curvilinear coordinates for M, which is then described by y = y(q), explicitly

  a = a,   b = b,   c = c,   A = A
  B = arccos[(a² + c² − b²)/(2ac)],   C = arccos[(a² + b² − c²)/(2ab)]

We now consider the intersection M ∩ p_P(y0) = p_P′(y0). If we combine the requirements for y ∈ M

  a = any,   b = any,   c = any,   A = any
  B = arccos[(a² + c² − b²)/(2ac)],   C = arccos[(a² + b² − c²)/(2ab)]

with those for y ∈ p_P(y0)

  a = any,   b = b0,   c = c0,   A = A0,   B = any,   C = any

we conclude that y ∈ M ∩ p_P(y0) = p_P′(y0) when

  a = any,   b = b0,   c = c0,   A = A0
  B = arccos[(a² + c0² − b0²)/(2 a c0)],   C = arccos[(a² + b0² − c0²)/(2 a b0)]

This means that p_P′(y0) = M ∩ p_P(y0) is of dimension 1, where the parameter a may serve as the single curvilinear coordinate. When, in particular, y0 ∈ R(f), then A = A0 = arccos[(b0² + c0² − a0²)/(2 b0 c0)].

From the definition of g we see that when y ∈ p_P′(y0), where y0 ∈ R(f), then x = g(y) = c_y0(y) is given by

  x1 = (c0 − b0 sin A0)/3,   y1 = (a − c0 − b0 cos A0)/3
  x2 = (c0 − b0 sin A0)/3,   y2 = (a + 2c0 − b0 cos A0)/3
  x3 = (c0 + 2b0 sin A0)/3,  y3 = (a − c0 + 2b0 cos A0)/3

Since y0 ∈ R(f) is fixed, the image g(p_P′(y0)) consists of elements x having x1, x2, x3 constant, while y1, y2, y3 depend on a single parameter a. It has therefore dimension 1. Since x0 = g(y0) is fixed, x = g(y) may also be expressed in terms of x0:

  x1 = x01,   y1 = y01 + (a − a0)/3
  x2 = x02,   y2 = y02 + (a − a0)/3
  x3 = x03,   y3 = y03 + (a − a0)/3

or

  x = x0 + [(a − a0)/3] [0 0 0 1 1 1]^T

We can also express g(p_P′(y0)) by five conditions. One possible choice is

  x1 = x2,   x1 + x2 + x3 = c0,   y2 − y1 = c0
  x3 − x2 = b0 sin A0,   y3 − y1 = b0 cos A0

Comparison with the identical conditions for x ∈ F′_y0 shows that in fact g(p_P′(y0)) = F′_y0 = F_y0 ∩ R(g) is an element of the fibering F′ induced by F on R(g).

Returning once again to the theory, we would like the generalized inverse g of a continuous mapping f to be itself continuous, and this means that the mappings c_y cannot be independent. Loosely speaking, they must map neighboring elements of Y belonging to neighboring fibers of P into neighboring elements of X. Thus c_y may be considered as the restriction of a continuous mapping c : R(f) × Y → X, defined by c(y, z) = c_y(z), which is continuous in both y and z.

We can overcome the need to use a family of mappings c_y, one for each y ∈ R(f), with the introduction of additional fiberings for Y and X and a single mapping between them. Let M be a section of the fibering G. In order to define g on p_P(y) for every y ∈ R(f) it is sufficient to define it on M ∩ p_P(y), since the fibering G, being a refinement of P, naturally extends g to the whole of p_P(y): for any z ∈ p_P(y), z ∉ M, there exists a unique fiber p_G(z) ⊆ p_P(z) from G such that p_G(z) ∩ M = {z0} ⊆ M ∩ p_P(z) and g(z) = g(z0). Although any section M of G is appropriate, it is more convenient to choose M so that it contains R(f). This is possible in view of the next lemma.

Lemma. For every y ∈ R(f) it holds that p_G(y) ∩ R(f) = {y}.

Proof. Since G is a refinement of P, p_G(y) ⊆ p_P(y). Since R(f) is a section of P, it holds that p_P(y) ∩ R(f) = {y}, which combined with p_G(y) ⊆ p_P(y) yields the desired result. ∎

The fibering P induces a fibering P′ = P|M on M with fibers p_P|M(z) = M ∩ p_P(z). R(f) is a section of P and, since R(f) ⊆ M by choice, it is also a section of P′. It is possible to consider a fibering R of M, containing R(f), which is complementary to P′. Thus every z ∈ M is the unique element p_R(z) ∩ p_P′(z) = {z}. The generalized inverse g must be a mapping whose restriction to M is a one-to-one mapping g|M from M to R(g), since
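The one-parameter line structure of g(p_P′(y0)) can be checked numerically. The sketch below assumes the explicit g of Example 1 and arbitrary values b0, c0, A0; note that the paper's vector [0 0 0 1 1 1]^T refers to the coordinate ordering (x1, x2, x3, y1, y2, y3), which in the (x1, y1, ..., y3) ordering used here becomes (0, 1, 0, 1, 0, 1):

```python
import math

def g(y):
    # the generalized inverse of Example 1 (B and C are ignored)
    a, b, c, A, B, C = y
    s, t = b*math.sin(A), b*math.cos(A)
    return [(c - s)/3, (a - c - t)/3, (c - s)/3,
            (a + 2*c - t)/3, (c + 2*s)/3, (a - c + 2*t)/3]

b0, c0, A0 = 1.5, 1.8, 0.9

def fiber_point(a):
    # a point of the one-dimensional fiber M ∩ p_P(y0): only a varies,
    # while B and C follow from the cosine conditions
    B = math.acos((a*a + c0*c0 - b0*b0) / (2*a*c0))
    C = math.acos((a*a + b0*b0 - c0*c0) / (2*a*b0))
    return [a, b0, c0, A0, B, C]

# moving a by 0.3 moves the image by 0.3/3 = 0.1 in each y-coordinate
d = [u - v for u, v in zip(g(fiber_point(2.3)), g(fiber_point(2.0)))]
assert all(abs(di - ei) < 1e-12
           for di, ei in zip(d, [0, 0.1, 0, 0.1, 0, 0.1]))
```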
dim M = dim R(g) = r_g. A corresponding choice of complementary fiberings of R(g) is possible. One is provided by the fibering F, which induces a fibering F′ = F|R(g) on R(g) with fibers p_F|R(g)(x) = p_F(x) ∩ R(g). For g to be a generalized inverse it must hold that g(z) ∈ F_p(z) ∩ R(g), so that each fiber of P′ is mapped into the corresponding element of F′. Since z ∈ M is determined from one fiber of P′ and one fiber of R, it remains to introduce a fibering S of R(g) complementary to F′, such that g(z) is determined by the intersection of a fiber of S with the already specified fiber of F′. More precisely, to the fiber p_P′(z) ∈ P′ in M corresponds the fiber F_p(z) ∩ R(g) ∈ F′ in R(g), and it remains to assign to the fiber p_R(z) ∈ R in M a fiber S′ ∈ S in R(g), so that g(z) = [F_p(z) ∩ R(g)] ∩ S′. In particular, S must contain the particular section S of F, which must be assigned to R(f) ∈ R.

Proposition 2. A generalized inverse g of f, with rank(g) > rank(f), is uniquely defined if the following are specified:
  A fibering P of Y having R(f) as a section.
  A refinement G of the fibering P.
  A section M of the fibering G containing R(f).
  A manifold R(g) of X with dimension r_g > r, on which F induces a fibering F′ of dimension d = m − r = dim X − rank(f).
  A fibering S of R(g) complementary to F′.
  A fibering R of M, containing R(f), complementary to the fibering P′ induced by P on M.
  A mapping C : R → S.
The generalized inverse g is defined on M by {g(z)} = [F_p(z) ∩ R(g)] ∩ C(p_R(z)), where p(z) is defined by {p(z)} = p_P′(z) ∩ R(f) = p_P(z) ∩ R(f).

Proof. We must show that the defined mapping g is a generalized inverse of f. For an arbitrary x ∈ X, y = f(x) ∈ R(f) ⊆ M. Since y ∈ R(f) we have that p(y) = y, p_R(y) = R(f) and C(p_R(y)) = C(R(f)) ≡ S ∈ S. Applying the above definition, we have that

  {g(y)} = [F_p(y) ∩ R(g)] ∩ C(p_R(y)) = [F_y ∩ R(g)] ∩ C(R(f)) = [F_y ∩ R(g)] ∩ S ⊆ F_y

and consequently f(g(y)) = y. Since y = f(x) it follows that f(g(f(x))) = (f ∘ g ∘ f)(x) = f(x) and, since x is arbitrary, f ∘ g ∘ f = f. ∎

Fig. 5. An illustration of Proposition 2

Example 1 (continued). We begin with the fibering R of M containing R(f). A fibering R of M can be introduced by fixing the value of the parameter a along the fibers p_P′(y0) = M ∩ p_P(y0), where we can assume without loss of generality that y0 ∈ R(f). If we want R to include R(f), the value of a = a(y) assigned to y ∈ p_P′(y0) must be such that a(y0) = a0. A simple choice is to set a = k + a0, in which case k replaces a as coordinate along p_P′(y0), while k = 0 gives the intersection y0 of p_P′(y0) with R(f). With this choice a fiber R_k from R is described by the elements y with

  a = k + a0,   b = b0,   c = c0,   A = A0
  B = arccos[(a² + c² − b²)/(2ac)],   C = arccos[(a² + b² − c²)/(2ab)]

where y0 = p(y). From the definition of the projection p, y0 = p(y) is given by

  a0 = √(b² + c² − 2bc cos A),   b0 = b,   c0 = c,   A0 = A
  B0 = arccos[(c − b cos A)/√(b² + c² − 2bc cos A)]
  C0 = arccos[(b − c cos A)/√(b² + c² − 2bc cos A)]

Replacing these values in the preceding description of R_k, it follows that any y ∈ R_k satisfies

  a = k + √(b² + c² − 2bc cos A),   b = b,   c = c
  A = A,   B = arccos[(a² + c² − b²)/(2ac)],   C = arccos[(a² + b² − c²)/(2ab)]

R_k is therefore of dimension 3, since it satisfies the 6 − 3 = 3 conditions

  (a − k)² = b² + c² − 2bc cos A
  cos B = (a² + c² − b²)/(2ac),   cos C = (a² + b² − c²)/(2ab)

In order to describe the image S_k ≡ g(R_k) we need first

  cos A = [b² + c² − (a − k)²]/(2bc)
  ⇒ sin A = √{4b²c² − [b² + c² − (a − k)²]²}/(2bc)

which used in the definition of g yield

  x1 = c/3 − √{4b²c² − [b² + c² − (a − k)²]²}/(6c)
  y1 = a/3 − c/2 − [b² − (a − k)²]/(6c)
  x2 = c/3 − √{4b²c² − [b² + c² − (a − k)²]²}/(6c)
  y2 = a/3 + c/2 − [b² − (a − k)²]/(6c)
  x3 = c/3 + √{4b²c² − [b² + c² − (a − k)²]²}/(3c)
  y3 = a/3 + [b² − (a − k)²]/(3c)

S_k has dimension 3, since it is determined by three parameters a, b, c (k is fixed). It can also be determined by three conditions, e.g.,

  x1 = x2,   x1 + x2 + x3 = y2 − y1
  (x3 − x2)² + (y3 − y2)² = (y1 + y2 + y3 − k)²

Let y ∈ R_k ∩ p_P′(y0). Then y must simultaneously satisfy the conditions for y ∈ R_k

  (a − k)² = b² + c² − 2bc cos A
  cos B = (a² + c² − b²)/(2ac),   cos C = (a² + b² − c²)/(2ab)

and the conditions for y ∈ p_P′(y0)

  b = b0,   c = c0,   A = A0
  cos B = (a² + c0² − b0²)/(2 a c0),   cos C = (a² + b0² − c0²)/(2 a b0)

The combination gives the six conditions

  a = a0 + k,   b = b0,   c = c0,   A = A0
  B = arccos{[(a0 + k)² + c0² − b0²]/[2(a0 + k) c0]}
  C = arccos{[(a0 + k)² + b0² − c0²]/[2(a0 + k) b0]}

which uniquely determine y.

The image g(y) of this y is given, according to the definition of g, by

  x1 = (c0 − b0 sin A0)/3,   y1 = (a0 + k − c0 − b0 cos A0)/3
  x2 = (c0 − b0 sin A0)/3,   y2 = (a0 + k + 2c0 − b0 cos A0)/3
  x3 = (c0 + 2b0 sin A0)/3,  y3 = (a0 + k − c0 + 2b0 cos A0)/3

Let x ∈ S_k ∩ F′_y0 = g(R_k) ∩ g(p_P′(y0)). It must simultaneously satisfy the conditions for x ∈ S_k

  x1 = x2,   x1 + x2 + x3 = y2 − y1
  (x3 − x2)² + (y3 − y2)² = (y1 + y2 + y3 − k)²

and those for x ∈ F′_y0

  x1 = x2,   x1 + x2 + x3 = c0,   y2 − y1 = c0
  x3 − x2 = b0 sin A0,   y3 − y1 = b0 cos A0

The combination gives the six conditions

  x1 = x2,   x1 + x2 + x3 = c0,   y2 − y1 = c0
  x3 − x2 = b0 sin A0,   y3 − y1 = b0 cos A0
  (x3 − x2)² + (y3 − y2)² = (y1 + y2 + y3 − k)²

which uniquely determine x. Indeed, solving this system we get

  x1 = (c0 − b0 sin A0)/3 = x2,   x3 = (c0 + 2b0 sin A0)/3
  y1 = (a0 + k − c0 − b0 cos A0)/3
  y2 = (a0 + k + 2c0 − b0 cos A0)/3
  y3 = (a0 + k − c0 + 2b0 cos A0)/3

But the above components of x determined by {x} = S_k ∩ F′_y0 are in fact identical to those of the image x = g(y), where y is determined by {y} = R_k ∩ p_P′(y0). Therefore g is uniquely determined from the mapping C : R_k → S_k, which to each fiber R_k ∈ R assigns a fiber S_k ∈ S, the correspondence having been established through the common parameter k. The mapping C : R → S maps
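The inclusion g(R_k) ⊆ S_k can be checked numerically; the sketch assumes the explicit g of Example 1 and arbitrary values b0, c0, A0, k (B and C do not enter g, so they are set to dummy values):

```python
import math

def g(y):
    # the generalized inverse of Example 1 (B and C are ignored)
    a, b, c, A, B, C = y
    s, t = b*math.sin(A), b*math.cos(A)
    return [(c - s)/3, (a - c - t)/3, (c - s)/3,
            (a + 2*c - t)/3, (c + 2*s)/3, (a - c + 2*t)/3]

b0, c0, A0, k = 1.5, 1.8, 0.9, 0.4
a0 = math.sqrt(b0*b0 + c0*c0 - 2*b0*c0*math.cos(A0))
y = [a0 + k, b0, c0, A0, 0.0, 0.0]     # an element of the fiber R_k
x1, y1, x2, y2, x3, y3 = g(y)

# the three conditions describing S_k
assert abs(x1 - x2) < 1e-12
assert abs((x1 + x2 + x3) - (y2 - y1)) < 1e-12
assert abs((x3 - x2)**2 + (y3 - y2)**2 - (y1 + y2 + y3 - k)**2) < 1e-12
```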
  R_k = { y | (a − k)² = b² + c² − 2bc cos A,
          cos B = (a² + c² − b²)/(2ac),  cos C = (a² + b² − c²)/(2ab) }

to

  S_k = { x | x1 = x2,  x1 + x2 + x3 = y2 − y1,
          (x3 − x2)² + (y3 − y2)² = (y1 + y2 + y3 − k)² }

The other required correspondence, between p_P′(y0) and F′_y0, is naturally introduced through the common element y0 used in the definitions of the fibers p_P′(y0) ∈ P′ and the corresponding ones F′_y0 ∈ F′, where F′ is thus a fibering of R(g) complementary to S. For given y0 ∈ R(f) the fiber F′_y0 depends only on the mapping f and not on the generalized inverse mapping g.

Example 2. We shall now give an example where the generalized inverse is not defined directly but follows from the choice of fiberings and subspaces, as well as a fibering mapping C, along the lines of Proposition 2.

We consider the same mapping f as in Example 1, but we will choose a different description of the range R(f) and the solution spaces F_y, using this time c, A, B as the free parameters. In this case y = [a b c A B C]^T ∈ R(f) when

  a = c sin A / sin(A + B),   b = c sin B / sin(A + B),   c = any
  A = any,   B = any,   C = π − (A + B)

The conditions in this description are in fact the well-known angle and sine conditions

  A + B + C = π,   a/sin A = b/sin B = c/sin C

We will introduce a fibering P having R(f) as a section by y ∈ p_P(y′) when

  a = any,   b = any,   c = c′,   A = A′,   B = B′,   C = any

If y ∈ p_P(y′) ∩ R(f) then y must simultaneously satisfy the conditions for y ∈ p_P(y′):

  c = c′,   A = A′,   B = B′

and the conditions for y ∈ R(f):

  a = c sin A / sin(A + B),   b = c sin B / sin(A + B),   C = π − (A + B)

which, combined, show that y is uniquely determined from

  a = c′ sin A′ / sin(A′ + B′),   b = c′ sin B′ / sin(A′ + B′),   c = c′
  A = A′,   B = B′,   C = π − (A′ + B′)

This means that p_P(y′) ∩ R(f) = {y} and R(f) is indeed a section of P, as required. We can therefore describe the fibers of P using y0 ∈ R(f), in which case y ∈ p_P(y0) when

  a = any,   b = any,   c = c0,   A = A0,   B = B0,   C = any

From the preceding conditions it follows that for an arbitrary y its projection y0 = p(y) = f(g(y)) is given by

  a0 = c sin A / sin(A + B),   b0 = c sin B / sin(A + B),   c0 = c
  A0 = A,   B0 = B,   C0 = π − (A + B)

The next step is to introduce a refinement G of P such that each fiber of G intersects R(f) at most at one element. Our choice is y ∈ p_G(y′) when

  a = a′,   b = b′,   c = c′,   A = A′,   B = B′,   C = any

Obviously y ∈ p_G(y′) ⇒ y ∈ p_P(y′) and therefore p_G(y′) ⊆ p_P(y′), so that G is indeed a refinement of P. If y′ ∉ R(f) then obviously for any y ∈ p_G(y′) also y ∉ R(f), i.e. p_G(y′) ∩ R(f) = ∅. If y′ = y0 ∈ R(f) then

  a0 = c0 sin A0 / sin(A0 + B0),   b0 = c0 sin B0 / sin(A0 + B0),   C0 = π − (A0 + B0)

and y ∈ p_G(y0) when

  a = c0 sin A0 / sin(A0 + B0),   b = c0 sin B0 / sin(A0 + B0),   c = c0
  A = A0,   B = B0,   C = any

in which case p_G(y0) ∩ R(f) = {y0}, as required.

In order to choose a section M of G we must pick a single element y from each fiber p_G(y′). Since the elements a, b, c, A, B are already determined from the equal ones of y′, we are left with the possibility of choosing one value of C for every fiber of G. Our choice is C = π − (A + B), so that y ∈ M when

  a = any,   b = any,   c = any
  A = any,   B = any,   C = π − (A + B)

M is obviously a section of G, since y ∈ M ∩ p_G(y′) is uniquely determined from

  a = a′,   b = b′,   c = c′,   A = A′,   B = B′,   C = π − (A′ + B′)

The fibering P induces a fibering P′ on M with fibers p_P′(y0) = p_P(y0) ∩ M. Combining the conditions for y ∈ M with those for y ∈ p_P(y0), we conclude that y ∈ p_P′(y0) when

  a = any,   b = any,   c = c0,   A = A0,   B = B0,   C = π − (A0 + B0)

For the fibering F of the solution spaces F_y (y ∈ R(f)) we need a representation which uses c, A, B as free parameters: x ∈ F_y whenever f(x) = y, i.e.,
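The projection p = f ∘ g of Example 2, as just written out, can be verified to be idempotent with its range on R(f); a minimal sketch with arbitrary test values:

```python
import math

def p(y):
    # Example 2: p = f∘g keeps c, A, B and restores the sine-rule values
    # of a and b together with the angle condition for C
    a, b, c, A, B, C = y
    s = math.sin(A + B)
    return [c*math.sin(A)/s, c*math.sin(B)/s, c, A, B, math.pi - (A + B)]

y = [2.0, 1.5, 1.8, 0.9, 1.1, 0.7]     # arbitrary y outside R(f)
y0 = p(y)
assert all(abs(u - v) < 1e-12 for u, v in zip(p(y0), y0))   # R(p) = R(f)
```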
  (x2 − x1)² + (y2 − y1)² = c²

  [(x3 − x1)(x2 − x1) + (y3 − y1)(y2 − y1)] / {√[(x3 − x1)² + (y3 − y1)²] √[(x2 − x1)² + (y2 − y1)²]} = cos A

  [(x3 − x2)(x1 − x2) + (y3 − y2)(y1 − y2)] / {√[(x3 − x2)² + (y3 − y2)²] √[(x2 − x1)² + (y2 − y1)²]} = cos B

For R(g) = g(Y) we must choose a subset of X which has the same dimension 5 as M, since R(g) = g(M) also. Our choice is to set x ∈ R(g) when

  y2 = y1

The fibering F′ induced by F on R(g) has fibers F′_y = F_y ∩ R(g). Thus, taking y2 = y1 into account in the preceding conditions for x ∈ F_y, we obtain, after some algebraic manipulation, that x ∈ F′_y when

  y2 = y1,   x2 − x1 = c,   (y3 − y1)/(x3 − x1) = tan A,   (y3 − y2)/(x3 − x2) = −tan B

Since for every y the fiber p_P′(y) will be mapped onto the fiber F′_p(y), it remains to introduce a fibering R of M which is complementary to P′. We correspond to each pair of positive numbers a, b a fiber R_{a,b} ∈ R, which is defined by y ∈ R_{a,b} when

  a = a,   b = b,   c = any,   A = any,   B = any,   C = π − (A + B)

In order to show that R is complementary to P′ (within M), let y ∈ R_{a,b} ∩ p_P′(y0). Combining the conditions for y ∈ R_{a,b} with the ones for y ∈ p_P′(y0), we conclude that y is uniquely determined from

  a = a,   b = b,   c = c0,   A = A0,   B = B0,   C = π − (A0 + B0)

Therefore R_{a,b} ∩ p_P′(y0) = {y} and R is complementary to P′.

One more step remains to complete the definition of the generalized inverse g of f, that is, to introduce a mapping

  C : R → S : R_{a,b} → S_{a,b}

where S is a fibering of R(g) complementary to F′. We choose S_{a,b} = C(R_{a,b}) by requiring that x ∈ S_{a,b} when

  x1 = a,   y1 = b,   y2 = b

Obviously y1 = y2 and thus S_{a,b} ⊆ R(g). If x ∈ S_{a,b} ∩ F′_y0 then, combining these conditions for x ∈ S_{a,b} with those for x ∈ F′_y0, we obtain

  x1 = a,   y1 = b,   y2 = b
  x2 − x1 = c0,   (y3 − y1)/(x3 − x1) = tan A0,   (y3 − y2)/(x3 − x2) = −tan B0

These six conditions can be solved for the elements of x, which is then uniquely determined by

  x1 = a,   x2 = a + c0,   y1 = b,   y2 = b
  x3 = a + b0 cos A0,   y3 = b + b0 sin A0

Since S_{a,b} ∩ F′_y0 = {x}, the fibering S is complementary to F′ within R(g).

In order to obtain finally an explicit description of the generalized inverse mapping g : y → x = g(y), let y = [a b c A B C]^T be an arbitrary element of Y. Then y ∈ R_{a,b} ∩ p_P′(y0), where y0 = p(y) is given by

  a0 = c sin A / sin(A + B),   b0 = c sin B / sin(A + B),   c0 = c
  A0 = A,   B0 = B,   C0 = π − (A + B)

and x = g(y) is uniquely determined from x ∈ S_{a,b} ∩ F′_y0. This means that x = g(y) results if we replace a = a, b = b and the values of the elements of y0 in the expression giving x ∈ S_{a,b} ∩ F′_y0. This results in

  x1 = a,   x2 = a + c,   y1 = b,   y2 = b
  x3 = a + c [sin B / sin(A + B)] cos A,   y3 = b + c [sin B / sin(A + B)] sin A

thus bringing to an end Example 2.

A special case of interest is that of a minimal-rank generalized inverse g, where r_g = r. In this case

  r_g = dim(R(g)) = r = dim(R(f))

which combined with S ⊆ R(g) implies that S = R(g).

Lemma. When r_g = r the fibering G induced by g coincides with the fibering P induced by p = f ∘ g.

Proof. Since S = R(g), the restriction g|R(f) of g to the range of f is in this case a bijection between R(f) and R(g). For every z ∉ R(f), g(z) ∈ R(g) = S and, since S is a section of F, {g(z)} = S ∩ F_y for a unique fiber F_y corresponding to a unique y ∈ R(f). Therefore y = f(g(z)) = (f ∘ g)(z) = p(z) and z ∈ p_P(y) = p_P((f ∘ g)(z)). Consequently the fibering G induced by g coincides with the fibering P induced by p = f ∘ g. ∎

For the particular choice r_g = r, the last two requirements in Proposition 1 can be relaxed. The same simplified situation can arise in a different way, by requiring that in addition to g being a generalized inverse of f, at the same time f is a generalized inverse of g. Repeating property (G1) with the roles of f and g interchanged leads to the following definition.

Definition. A generalized inverse g of a given mapping f is called a reflexive generalized inverse if in addition to (G1) it satisfies the following condition, henceforth denoted (G2):

  g ∘ f ∘ g = g   (8)

Lemma. A generalized inverse g of f is a reflexive generalized inverse of f if and only if r_g = r.
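The generalized inverse obtained at the end of Example 2 can be checked numerically; the sketch below assumes, as before, the vertex-coordinate form of f from Example 1. It also illustrates the reflexivity lemma: since r_g = 5 > 3 = r, condition (G2) must fail off R(f), and it does:

```python
import math

def f(x):
    # Example 1: vertex coordinates -> sides a, b, c and angles A, B, C
    x1, y1, x2, y2, x3, y3 = x
    a = math.hypot(x3 - x2, y3 - y2)
    b = math.hypot(x3 - x1, y3 - y1)
    c = math.hypot(x2 - x1, y2 - y1)
    return [a, b, c,
            math.acos((b*b + c*c - a*a) / (2*b*c)),
            math.acos((a*a + c*c - b*b) / (2*a*c)),
            math.acos((a*a + b*b - c*c) / (2*a*b))]

def g(y):
    # the explicit generalized inverse ending Example 2 (C is ignored)
    a, b, c, A, B, C = y
    s = c * math.sin(B) / math.sin(A + B)   # equals side b when y is in R(f)
    return [a, b, a + c, b, a + s*math.cos(A), b + s*math.sin(A)]

x = [0.3, -1.2, 2.0, 0.7, 1.1, 2.5]
assert all(abs(u - v) < 1e-9 for u, v in zip(f(g(f(x))), f(x)))   # (G1)

# (G2) fails for an arbitrary y outside R(f): g is not reflexive
y = [2.0, 1.5, 1.8, 0.9, 1.1, 0.7]
assert any(abs(u - v) > 1e-3 for u, v in zip(g(f(g(y))), g(y)))
```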
Fig. 6. The geometry of a reflexive generalized inverse g of a nonlinear mapping f. All elements of p_P(y) = p_G(y) are mapped into the same element x̂ ∈ S and thus projected by p onto the same element ŷ

Proof. Necessity: an immediate consequence of g being a generalized inverse of f is that R(g ∘ f) = R(g), and r_g = rank(g ∘ f) ≤ min(r, r_g), so that r_g ≤ r ≤ min(n, m), which combined with r ≤ r_g from Eq. (5) implies that r_g = r.
Sufficiency: if r = r_g then dim(R(g)) = dim(R(f)) = dim(S) and S = R(g). If x ∈ S then {x} = S ∩ F_{f(x)} and g(f(x)) = x. Let y be an arbitrary element of Y; then g(y) ∈ R(g) = S, and setting g(y) in the place of the previous x we obtain g(f(g(y))) = g(y) or (g ∘ f ∘ g)(y) = g(y), and since y is arbitrary, g ∘ f ∘ g = g. □

Proposition 3. A reflexive generalized inverse g of f is uniquely defined if the following are specified:

(1) The section S = g(R(f)) = R(g) of the fibering F of X induced by f.
(2) The fibering P to be induced by p = f ∘ g, i.e., all the fibers P_y corresponding to every y ∈ R(f).

The question of what the fibers G_x of a reflexive generalized inverse are is answered by the following lemma.

Lemma. For a reflexive generalized inverse g, the (f ∘ g)-induced fibers are identical to the g-induced fibers, i.e., P = G.

Proof. Consider any y ∈ R(f): z ∈ p_P(y) ⟹ y = (f ∘ g)(z) ⟹ g(y) = (g ∘ f ∘ g)(z) = g(z) ⟹ z ∈ p_G(y) ⟹ p_P(y) ⊂ p_G(y). But since G is a refinement of P, it must also hold that p_G(y) ⊂ p_P(y), and thus p_G(y) = p_P(y) for any y ∈ R(f). Therefore G = P. □

Example 3. We shall give an example of a reflexive generalized inverse which is defined indirectly through the choice of the fibering P ≡ G having R(f) as a section, and the section S of the fibering F.
We choose P in such a way that for any given y₀ ∈ R(f) an element y ∈ p_P(y₀) when

a₀ = a,  b₀ = a cos((A+C−B)/2) / cos((B+C−A)/2),  c₀ = a cos C / cos((B+C−A)/2)

A₀ = A + (π − (A+B+C))/2,  B₀ = B + (π − (A+B+C))/2,  C₀ = C

These equations describe also the projection y₀ = p(y) for any given y.
For the corresponding fiber F_{y₀} of F we need the appropriate description: x ∈ F_{y₀} when

[(x₂ − x₁)(x₃ − x₂) + (y₂ − y₁)(y₃ − y₂)] / [√((x₂ − x₁)² + (y₂ − y₁)²) √((x₃ − x₂)² + (y₃ − y₂)²)] = −cos B₀

(x₃ − x₂)² + (y₃ − y₂)² = a₀²
(x₂ − x₁)² + (y₂ − y₁)² = c₀²

The definition of the generalized inverse g is finalized with the selection of the section S of F by x ∈ S when

x₁ + x₂ + x₃ = 0,  y₁ + y₂ + y₃ = 0,  x₂ = x₃

For an arbitrary element y of Y the corresponding fiber p_P(y) = p_P(y₀) is determined from the projection y₀ = p(y). Since R(g) = S while p_P(y₀) is mapped onto F_{y₀}, the image x = g(y) must belong to both, x ∈ S ∩ F_{y₀}. Combining the conditions for x ∈ S and x ∈ F_{y₀}, we obtain a system of six equations in six unknowns

[(x₂ − x₁)(x₃ − x₂) + (y₂ − y₁)(y₃ − y₂)] / [√((x₂ − x₁)² + (y₂ − y₁)²) √((x₃ − x₂)² + (y₃ − y₂)²)] = −cos B₀

(x₃ − x₂)² + (y₃ − y₂)² = a₀²,  (x₂ − x₁)² + (y₂ − y₁)² = c₀²

x₁ + x₂ + x₃ = 0,  y₁ + y₂ + y₃ = 0,  x₂ = x₃

The solution is easily verified to be

x₁ = (2/3) c₀ sin B₀,   y₁ = −(1/3) a₀ + (2/3) c₀ cos B₀

x₂ = −(1/3) c₀ sin B₀,  y₂ = −(1/3) a₀ − (1/3) c₀ cos B₀

x₃ = −(1/3) c₀ sin B₀,  y₃ = (2/3) a₀ − (1/3) c₀ cos B₀

These expressions give in fact not g but its restriction g|_{R(f)} to R(f). This can be completed to g = g|_{R(f)} ∘ p from the known projection mapping p, by simply replacing B₀, a₀, c₀ with their expressions in terms of a, b, c, A, B, C. Thus we obtain the explicit form x = g(y) of the reflexive generalized inverse g
x₁ = (2/3) a cos((A−B+C)/2) cos C / cos((B+C−A)/2)

y₁ = −(1/3) a + (2/3) a sin((A−B+C)/2) cos C / cos((B+C−A)/2)

x₂ = −(1/3) a cos((A−B+C)/2) cos C / cos((B+C−A)/2)

y₂ = −(1/3) a − (1/3) a sin((A−B+C)/2) cos C / cos((B+C−A)/2)

x₃ = −(1/3) a cos((A−B+C)/2) cos C / cos((B+C−A)/2)

y₃ = (2/3) a − (1/3) a sin((A−B+C)/2) cos C / cos((B+C−A)/2)   □

For reflexive generalized inverses we have a possibility of choices, corresponding to the choices of the section S of F and of the fibering P of Y over the elements of R(f). In analogy to the uniquely defined pseudo-inverse of a linear operator we seek two conditions, additional to (G1) and (G2), such that (G3) specifies the fibering P and (G4) specifies the section S of F.
We introduce the minimum-distance generalized inverse. Let ρ_Y be the distance function of the metric space Y. For any fixed element ŷ ∈ R(f) define

R_ŷ = {z ∈ Y : ρ_Y(z, ŷ) = min_{y ∈ R(f)} ρ_Y(z, y)}    (9)

Eq. (9) we name (G3). In the particular case that the subsets R_ŷ are fibers of Y, as ŷ varies over R(f), then we can employ the following definition: g is a minimum-distance generalized inverse of f if P = R, i.e., when the fibering P induced by p = f ∘ g is identical with the fibering R with fibers R_ŷ.
If Y is an inner-product vector space with metric induced by a quadratic norm, we may use the term "least squares" in place of "minimum distance".
We now define the x₀-nearest generalized inverse. Let ρ_X be the distance function of X, let x₀ ∈ X be fixed and let

ρ_X(x₀, x_{F_y}) = min_{z ∈ F_y} ρ_X(x₀, z)    (10)

This equation we denote (G4). In the particular case that there exists only one element on every F_y which satisfies (G4), then the set of all x_{F_y} constitutes a section S of F, and we can employ the following definition: g is an x₀-nearest generalized inverse of f if S is taken as the image of R(f) under g, i.e., S = g(R(f)).

Remark. If X is an inner-product vector space with metric induced by the norm, while x₀ = 0, we may use the term "minimum norm" in place of "0-nearest". However, in the geodetic case 0 is a "prohibited" point, since it corresponds to a network with all its points coinciding.

Definition. The unique reflexive generalized inverse g of a given mapping f which is also a minimum-distance and x₀-nearest generalized inverse is called the pseudo-inverse of f.

Various other nonunique generalized inverses can be defined by combination of (G1) with some of the other properties (G2), (G3), and (G4). The inverses within the same class, i.e., the ones satisfying the same set of properties, may differ in the following aspects:

(1) In the section S when (G4) is not satisfied.
(2) In the fibering P induced by p = f ∘ g when (G3) is not satisfied.
(3) In the refinement G of P and the mapping c(y, z) = c_y(z) extending g outside R(f) when (G2) is not satisfied.

In the geodetic case property (G3) is a "must", since it solves the adjustment problem. Reflexivity (G2) is not necessary but convenient. The section S is specified either directly by a set of minimal constraints or by introducing (G4). (G4) is introduced either directly (pseudo-inverse solution) or indirectly by a set of inner constraints which describe the particular section S corresponding to (G4).

Definition. A set of (nonlinear) equations v(x, d) = 0 is called a set of minimal constraints (with respect to f) if the corresponding mapping v : X × R^{m−r} → R^{m−r} gives rise to a fibering H of X, such that its member H_d = {x ∈ X | v(x, d) = 0}, corresponding to a fixed d, is a section of the fibering F induced by f.

Definition. A set of minimal constraints h(x) = d is called a set of inner constraints (with respect to a given point x₀ of X) when the fiber H_d coincides with the section of F specified by the property (G4), i.e., when for every F_y ∈ F the unique element x such that F_y ∩ H_d = {x} is the one closest to x₀ among all elements of F_y.

Remark. In the special case that f and its generalized inverse are linear or affine mappings, R(f), the fibers F_y, P_y, G_x, Q_x, and the section S are all affine subspaces of the corresponding spaces Y and X. The fiberings F, P, G, Q consist of parallel affine subspaces, i.e., they are equivalent to quotient spaces of Y and X. Each quotient space is uniquely determined by its member passing through zero, i.e., by its "modulo" linear subspace in X or Y (Halmos 1974, Sect. 21). The fibering F, e.g., is equivalent to the quotient space of affine subspaces parallel to the null space of f. Thus the linear or affine generalized inverses are determined by a set of linear subspaces, instead of the fiberings (see Teunissen 1985). By the way, there is no reason to restrict the class of generalized inverses of a linear operator to be itself linear, as is usually done within the linear theory. A linear operator may well have a nonlinear generalized inverse. A geodetic example is the affine generalized inverse obtained implicitly by imposing a set of inhomogeneous linear constraints Hx = d, d ≠ 0, on the solution of the least-squares adjustment problem.
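The distinction between a minimal-constraint section and the x₀-nearest (inner-constraint) section, and the transformation between the corresponding solutions, can be sketched on the simplest geodetic configuration, a three-point levelling line. The numbers and helper names below are illustrative assumptions of this note, not taken from the text; the fibers here are the translates x + t(1, 1, 1):

```python
# Toy levelling line: heights x = (x1, x2, x3), observables
# f(x) = (x2 - x1, x3 - x2).  Every fiber F_y is the line {x + t(1,1,1)},
# so a single minimal constraint picks a unique solution on each fiber.
y = (1.0, 2.0)           # adjusted height differences
x0 = (10.0, 11.5, 12.8)  # given approximate heights (the point x0)

def f(x):
    return (x[1] - x[0], x[2] - x[1])

# Reference solution z from the trivial minimal constraint x1 = 0:
z = (0.0, y[0], y[0] + y[1])
assert f(z) == y

# "S-transformation" to the x0-nearest solution: shift along the fiber
# by the t minimizing |x0 - (z + t(1,1,1))|^2, i.e. t = mean(x0 - z).
t = sum(a - b for a, b in zip(x0, z)) / 3.0
x_near = tuple(b + t for b in z)
assert all(abs(u - v) < 1e-12 for u, v in zip(f(x_near), y))  # same fiber

# x_near satisfies the inner-constraint analogue sum(x - x0) = 0 and is
# closer to x0 than any other member of the fiber, e.g. z itself:
dist2 = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v))
assert abs(sum(u - v for u, v in zip(x_near, x0))) < 1e-12
assert dist2(x_near, x0) <= dist2(z, x0)
```

Because f here is linear, the fibers are parallel affine subspaces and the shift t is a closed form; in the nonlinear case of the following sections the analogous shift is obtained only by solving nonlinear equations.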
The approach followed here can be characterized as naive, and falls short of a proper mathematical theory of generalized inverses of nonlinear mappings, even in the relatively simple case of finite-dimensional spaces. The reason is that the assumptions made are too strong for the purpose of building a general theory, a fact which will become obvious even in the simple geodetic case. Some of the assumptions that do not generally hold are the following:

(1) f may not be defined at all points of the space X but only on an open subset U ⊂ X.
(2) It may not be possible to define g at all points of the space Y but only on an open subset V ⊂ Y.
(3) The minimum-distance property (G3) may not suffice for the determination of a fibering of Y or even V. This has to do with the differential-geometric properties of R(f) as a submanifold of Y and in particular with its curvature tensor. There might be points in V such that their minimum distance from R(f) does not correspond to a unique point of R(f) but either to a set of discrete points or even to a nondiscrete subset of R(f). (Think, e.g., of the case where R(f) is a ball in Y and consider its center, which belongs to any fiber induced by the minimum-distance principle.)
(4) The x₀-nearest property (G4) may not suffice for the determination of a fibering of X or even U. This has to do with the differential-geometric properties of the fibers F_y as submanifolds of X. There might be one or more fibers F_y on which there is more than one point attaining the minimum distance from x₀.
(5) The rank of f (= rank of df_x) may not be the same at all points x of U. The consequence is that R(f) is no longer a smooth submanifold of Y and its behavior at singular points (points with rank(df_x) < r) may pose additional problems.

We shall keep these problems in mind when studying the particular case of the mappings f arising in geodetic problems.
The last three problems have to do with points in Y or X which do not behave "nicely" with respect to the definitions employing (G3), (G4), and the definition of the rank r of f, r = rank(df_x). Starting from the latter, we note that the set of points y = f(x) ∈ R(f) for which rank(df_x) < r forms within R(f) a set of measure zero. This is in fact a consequence of Sard's theorem (Sternberg 1964, ch. II.3, Theorem 3.1), provided we impose some mild assumptions on the differentiability of the manifolds X and Y. In our case, where dim Y > dim X, it suffices to require that they are C¹ manifolds and f is also a mapping of class C¹.
A similar fact holds for the "problematic" points of Y which happen to belong to more than one set R_ŷ, which thus intersect and fail to be fibers of Y. If such points form a subset A = R_ŷ₁ ∩ R_ŷ₂ ⊂ Y which is not of measure zero, then it will be possible to find a y₀ ∈ A and a positive constant R such that B_R(y₀) ⊂ A, where B_R(y₀) is the ball with center y₀ and radius R. We shall show that the assumption that A has nonzero measure leads to a contradiction: let y be a point on the geodesic joining y₀ with ŷ₂ such that y ∈ B_R(y₀), in which case ρ(y₀, ŷ₂) = ρ(y₀, y) + ρ(y, ŷ₂) and ρ(y, ŷ₁) = ρ(y, ŷ₂) ⟹ ρ(y₀, ŷ₁) = ρ(y₀, y) + ρ(y, ŷ₁), i.e., y belongs also to the geodesic joining y₀ and ŷ₁, which is obviously impossible.
For the definition based on (G4) we can ascertain that the set of problematic points forms a subset of X with measure zero, provided that the fibers F_{f(x)} do not coincide locally with the boundaries of balls having x₀ as center.
In any case our definitions hold almost everywhere in X and Y, i.e., except on subsets of measure zero. From a physical point of view we can always work in a neighborhood of X and the corresponding neighborhood of Y which do not contain critical (problematic) points.

4 The geodetic nonlinear mapping

The nonlinear mapping f in the geodetic case arises from n individual mappings f_k, k = 1, ..., n, which map coordinates of points into various types of observables. We consider a geodetic network of N points with Cartesian coordinates xᵢ, i = 1, ..., N, where xᵢ is a vector of dimension either 2 (plane networks) or 3 (spatial networks). These vectors constitute a coordinate vector

x = [x₁ᵀ x₂ᵀ ... x_Nᵀ]ᵀ    (11)

of dimension 2N or 3N, which is considered an element of the m-dimensional Euclidean space (m = 2N or m = 3N)

X = E^{2N} = (E²)^N  or  X = E^{3N} = (E³)^N

equipped with the simple inner product (x_a, x_b) = x_aᵀ x_b.
The geodetic observables we will examine here can be distinguished as "angular" observables y_{ijk} and "distance" observables d_{ik}, which differ with respect to their invariance characteristics under coordinate transformations. Angular observables are invariant under similarity coordinate transformations x′ = S(x), while distance observables are invariant under rigid coordinate transformations x′ = R(x), which are defined pointwise by

x′ᵢ = S(xᵢ) = λ R xᵢ + t,   x′ᵢ = R(xᵢ) = R xᵢ + t    (12)

where t is a displacement vector, λ > 0 a scale parameter and R a proper orthogonal matrix (RᵀR = RRᵀ = I and |R| = 1) depending on a single rotational parameter θ in the two-dimensional case or on three rotational parameters θ₁, θ₂, θ₃ in the three-dimensional case. The observables are functions of the coordinates

y_{ijk} = y(xᵢ, xⱼ, x_k) = y_{ijk}(x)
d_{ik} = d(xᵢ, x_k) = d_{ik}(x)    (13)

which have the invariance properties
y(xᵢ, xⱼ, x_k) = y(S(xᵢ), S(xⱼ), S(x_k))
d(xᵢ, x_k) = d(R(xᵢ), R(x_k))    (14)

or

y_{ijk}(x) = y_{ijk}(S(x)),   d_{ik}(x) = d_{ik}(R(x))    (15)

The mapping f which maps the network coordinates x to the observables y = f(x) ∈ R(f) ⊂ Y consists in general of n₁ angular mappings and n₂ distance mappings (n₁ + n₂ = n), and we can distinguish two cases with respect to its invariance properties. When n₂ = 0 (only angular observations) then

f(x) = f(S(x))    (16)

while when n₂ ≠ 0 (observations of distances or both distances and angles) then

f(x) = f(R(x))    (17)

Remark. We must emphasize here that f is a nonlinear mapping and S or R are general transformations, not necessarily close to the identity (represented by λ = 1, R = I, t = 0). It has been pointed out by Grafarend and Kampmann (1996) that there are other groups of transformations, such as the ten-parameter conformal group G = C₁₀(3) in three dimensions; although it does not satisfy Eq. (16), i.e., f(x) ≠ f(G(x)), when it is close to the identity (G ≈ Id) it satisfies the corresponding relation df_{x₀}(x) = df_{x₀}(G(x)) for the differential mapping df_{x₀} of f at any Taylor point x₀. In other words, when G is close to the identity ("small" transformation) it leaves linearized angular observations invariant, a property shared of course with S. Therefore the C₁₀(3) group, although of essentially no relevance to the strictly nonlinear approach followed here, should be taken into consideration in numerical applications, where we have to resort, explicitly or implicitly (iterative solutions), to linearization and solutions x, x′ close to x₀, in which case the transformation from any such x to any such x′ is necessarily close to the identity.

The presence of angular observations has as a consequence that f is not defined at every point of X. For an angular observable y_{ijk} = y(xᵢ, xⱼ, x_k) to be defined it is necessary that the relevant network points are distinct, i.e., we must exclude from X points x with xᵢ = xⱼ, or xⱼ = x_k, or x_k = xᵢ. Although the same problem does not appear in the case of distance observations, we will stick to the restriction that those network points which are joined by observations must be distinct. Let J be the index set of those pairs (i, k) of point indices corresponding to points joined by observations and let D_{ik} be the diagonal subspace of X defined by

D_{ik} = {x ∈ X | xᵢ = x_k}    (18)

We must exclude all inappropriate diagonal subspaces, which leaves as the domain of definition of f the open subset V ⊂ X defined by its complement V^C with respect to X

V^C = ∪_{(i,k) ∈ J} D_{ik} = D    (19)

Since all D_{ik} are closed, the same holds for D = V^C, and V is an open subset of X.
Turning to the range space Y we note that distances are mappings from X to R⁺, since they obtain only positive values, while angles are mappings from X to the unit circle S. Thus, strictly speaking, f is a mapping

f : V → (S)^{n₁} × (R⁺)^{n₂}    (20)

However, it is standard practice to replace S with an interval of R, our choice being I = (0, 2π], in which case f becomes a mapping

f : V → U ≡ (I)^{n₁} × (R⁺)^{n₂} ⊂ Y = E^{n₁+n₂} = Eⁿ    (21)

Since I does not contain its limit point 0 and the same holds true for R⁺, U is an open subset of Y. We have assumed that Y = Eⁿ is Rⁿ equipped with the Euclidean inner product, though a more general inner product (y_a, y_b) = y_aᵀ P y_b may be used, in which case Y = E_Pⁿ.
The restriction of f from a mapping f : X → Y to a mapping f : V → U poses some serious problems as far as the properties (G3) and (G4) of generalized inverses are concerned. Now the range R(f) is restricted from f(X) to U ∩ f(X) and further to U ∩ f(V). The adjustment problem may have a solution ŷ ∈ f(X) closest to a given y ∈ U such that ŷ ∉ U ∩ f(V). This means that adjusted observations ŷ may include negative or zero distances, or angles outside the prescribed interval.
Turning to the domain V we must distinguish between the similarity-transformation and the rigid-transformation case. When distances are observed and ŷ ∈ f(X), the fibers F_ŷ may be initially defined with the help of any element x₀ ∈ X such that ŷ = f(x₀), by means of

F_ŷ = {x ∈ X | x = R(x₀)}    (22)

This means that if x₀ ∈ V the same holds for any x = R(x₀) and therefore F_ŷ ⊂ V. Thus F_ŷ is either included in V or lies completely outside V. This means that the problem of finding an element x̂ of a fiber closest to a given element x₀ ∈ V has a solution in V provided that F_ŷ ⊂ V, i.e., that ŷ ∈ f(V). When distance measurements are not included, the similarity transformation S gives rise to fibers

F_ŷ = {x ∈ X | x = S(x₀)}    (23)

which are no longer closed subsets of X, because they "converge" at the zero element 0 ∈ X. Although the scale is restricted to positive values λ > 0, it is possible to consider a sequence λᵢ → 0, giving rise to a sequence of similarity transformations Sᵢ such that xᵢ = Sᵢ(x₀) ∈ F_ŷ → 0 ∉ F_ŷ (convergence in norm). As a consequence, the problem of finding an element x̂ of a fiber closest to a given element x₀ ∈ V may have no solution (consider the case where 0 is the element of the closure F̄_ŷ of the fiber F_ŷ closest to x₀).
These are problems of great concern to the mathematician, but of little or no concern at all to the geodesist. The reason is that in geodetic applications we are confined to a small neighborhood N_y ⊂ U ∩ f(V) of the
true observables y ∈ U ∩ f(V), such that the observations y and the adjusted observations ŷ also belong to N_y. In the same way it is possible to choose an element x₀ ∈ V such that F_ŷ crosses a small neighborhood N_{x₀} ⊂ V of x₀ and furthermore the element x̂ of F_ŷ closest to x₀ also lies in the same neighborhood N_{x₀}.
Although we confine ourselves here to the deterministic aspects of generalized inverses, we are tempted to say that such a restriction to neighborhoods can also be justified when a probabilistic criterion is used for the "optimal" choice of a generalized inverse, which provides an estimator of x based on the observations y (outcomes of random variables). In this case a Bayesian point of view can be followed with a noninformative prior distribution for x, such that it is homogeneous within the desired neighborhood in X, and zero outside.
For a glimpse into the possible connection between the deterministic approach taken here and probabilistic aspects, consider the case where y is the outcome of a random variable Y with likelihood function L(y; x) = p_Y(y|x). If it so happens that L has the form L(x, y) = L̄(ρ(y, f(x))), where L̄(ρ) has a maximum when ρ is minimized, while ρ qualifies as a metric, then the maximum-likelihood estimate can be identified with the solution following from the use of a minimum-distance generalized inverse.
With these considerations in mind we turn to the nonlinear datum problem. We assume that the adjustment problem has already been solved and its solution ŷ defines a specific fiber F_ŷ ⊂ V ⊂ X, described by either Eq. (22) or Eq. (23).

5 Solution to the nonlinear geodetic datum problem

The datum problem can be viewed as the problem of completing the projection mapping p : Y → R(f), which solved the adjustment problem, into a (minimum-distance) generalized inverse g = c ∘ p of f, with the choice of a complementary mapping c : R(f) → S, where S is a section of the fibering F induced by f in V. In fact it is sufficient to determine only S, in which case c (and thus g) is automatically defined as the mapping of any y ∈ R(f) into c(y) = x, where x is the unique element of S ∩ F_y. One way of describing the d-dimensional manifold S is by means of a set of d = m − r minimal constraints v(x, d) = 0, where d is a fixed vector of d parameters, which usually appear in the explicit form h(x) = d. The alternative possible description of S by the m constraints x = φ(d, q), where d is again a fixed vector of d parameters and q a vector of r free parameters, is hardly used in geodesy.
Strictly speaking, a set of minimal constraints consists of two things: first, a family of mappings v(x, ·), or φ(·, q), which corresponds to the choice of the type of constraints, and second, a set of specific values for the parameters d. For example, in a horizontal network we may decide to fix the two coordinates of a particular point and the azimuth of a particular side and thus choose the function v(x, ·) (or h(x), or φ(·, q), analogously). When we decide on the values to be assigned to the fixed coordinates and azimuth, we specify the value of the parameter set d. The first step of this procedure corresponds to the choice of a fibering 𝒮 complementary to F, where each fiber corresponds to a particular set of values for d. In the second step the assignment of specific values for d picks a particular element S from 𝒮 which will serve as the section of F into which g will map R(f).
A particular choice for S is as the set of points x such that, if S ∩ F_y = {x}, then x is the closest point to a given x₀ ∈ V among all elements of F_y. The description of this particular section by the corresponding set of minimal constraints (inner constraints) is not as easy as in the linear case. Instead we shall follow an approach similar to that of Baarda's S-transformation in the linear case, where a known reference "minimal" solution is transformed into the desired "inner" solution, i.e., the x₀-nearest solution.
In order to proceed with the solution we need an analytical description of the solution fibers as submanifolds of X, e.g., by appropriate curvilinear coordinates. Let G denote the group of applicable transformations (i.e., either similarity or rigid) with elements g : X → X. If z belongs to a specific fiber, z ∈ F_y, then the same holds true for x = g(z) ∈ F_y for every g ∈ G.
This establishes for every fiber F_y of X a natural correspondence h : F_y × F_y → G:

g = h(z, x) ⟺ x = g(z)    (24)

This means that by fixing one of the arguments of h we can obtain a mapping h_z : x → h_z(x) ≡ h(z, x) : F_y → G which is invertible. If further f_G : G → R^d is a coordinate system on G (i.e., a parametrization of G), the composite mapping

f_G ∘ h_z : x → p(x) : F_y → R^d    (25)

establishes a coordinate system on F_y. We have already introduced such coordinate systems and the corresponding coordinates p in the descriptions, Eq. (12), of the similarity and rigid transformations. The dimension d of F_y depends on the applicable transformation group and the dimension of the network. For two-dimensional networks d = 4 for the similarity case (coordinates θ, t₁, t₂, λ) and d = 3 for the rigid case (coordinates θ, t₁, t₂). For three-dimensional networks d = 7 for the similarity case (coordinates θ₁, θ₂, θ₃, t₁, t₂, t₃, λ) and d = 6 for the rigid case (coordinates θ₁, θ₂, θ₃, t₁, t₂, t₃).
In fact we have introduced not one but a family of coordinate systems depending on the "reference point" z chosen for any fiber F_y. This choice can be made by introducing a section Z of the fibering F of X, e.g., by the use of minimal constraints which are easy to handle in computations, e.g., of the form x_{ik} = 0 (trivial constraints), where k is the network point index and i the coordinate index (i = 1, 2 for plane and i = 1, 2, 3 for three-dimensional networks).
Once a reference solution z is established, all other solutions in the same fiber x = x(z, p) depend on the coordinates p, which are identical to the parameters of the transformation g(p) : z → x. To obtain the solution
specified by a given set of d minimal constraints v(x, d) = 0 (usually of the form h(x) = d) we have to solve a system of d nonlinear equations with d unknowns

v(p) ≡ v(x(z, p), d) = 0    (26)

or, in the form usual in geodesy,

h(p) ≡ h(x(z, p)) = d    (27)

The solution p of the preceding system specifies the transformation g_p = g(p) which maps the reference solution z into the minimal-constraint solution x = g_p(z). This transformation is called the nonlinear Baarda S-transformation in the similarity case, and the nonlinear Baarda R-transformation in the rigid case.
The solution x closest to a given element x₀ ∈ X is obtained either by minimizing

F(p) = [x(z, p) − x₀]ᵀ [x(z, p) − x₀]    (28)

or by imposing the condition that the vector x₀ − x is orthogonal to the tangent space of the fiber F_ŷ at the point x. Following the first approach we are led to a system of d nonlinear equations with d unknowns

∂F/∂p (p) = 0  ⟹  [∂x/∂p (z, p)]ᵀ [x(z, p) − x₀] = 0    (29)

The solution p̂ of this system specifies the transformation g_p̂ = g(p̂) which maps the reference solution z into the solution x = g_p̂(z) closest to x₀. g_p̂ is the nonlinear Baarda S-transformation (or R-transformation) to the x₀-nearest solution, corresponding to the inner solution of Meissl in the linear case.
What about the famous inner constraints of Meissl which appear in the linear case? Well, ∂F/∂p = 0 can be solved in principle to give p = p(x₀, z), which substituted in x = [g(p)](z) = x(z, p) gives an expression of the form x = x(x₀, z) = x(z), since x₀ is fixed a priori. Recalling that z is determined by the intersection of a solution fiber F_y ∈ F with a section S₀ of F determined by a set of minimal constraints v(z, d) = 0 or z = φ(d, q), it is possible in principle to have a description of the r-dimensional manifold S₀ in the form z = z(q), where q is a vector of r free parameters serving in fact as coordinates for S₀. If this representation is replaced in the inner solution x = x(z) we obtain x = x(z(q)) = w(q). The resulting relation x = w(q) is the set of inner constraints, which to every q assigns, on the fiber p_F(z(q)) of F passing through z = z(q), its element closest to x₀. Unlike the linear case, the inner constraints are not independent of the reference minimal constraints, because they depend on their parametrization in terms of q. The above description of the inner constraints is a conceptual rather than a practical one, since the analytical solution of nonlinear equations or the derivation of the desired alternative descriptions of sections is not possible as a rule.
The detailed derivation of the specific form of the nonlinear system given by Eq. (29) is given in Appendix A for the four special cases corresponding to two- or three-dimensional networks and similarity or rigid transformations. The results are:

Three-dimensional networks – similarity transformation:

xᵢ = x̄₀ + [Σᵢ (x₀ᵢ − x̄₀)ᵀ R(θ)(zᵢ − z̄) / Σᵢ (zᵢ − z̄)ᵀ(zᵢ − z̄)] R(θ)(zᵢ − z̄)    (30)

where

z̄ ≡ (1/N) Σᵢ zᵢ,   x̄₀ ≡ (1/N) Σᵢ x₀ᵢ    (31)

and θ is the solution of the three nonlinear equations

(1/N) Σᵢ (x₀ᵢ − x̄₀)ᵀ [∂R(θ)/∂θ] (zᵢ − z̄) = 0    (32)

Three-dimensional networks – rigid transformation:

xᵢ = x̄₀ + R(θ)(zᵢ − z̄)    (33)

where θ is the solution of the three nonlinear equations

(1/N) Σᵢ (x₀ᵢ − x̄₀)ᵀ [∂R(θ)/∂θ] (zᵢ − z̄) = 0    (34)

Two-dimensional networks – similarity transformation:

xᵢ = x̄₀ + [Σᵢ (x₀ᵢ − x̄₀)ᵀ R(ϑ)(zᵢ − z̄) / Σᵢ (zᵢ − z̄)ᵀ(zᵢ − z̄)] R(ϑ)(zᵢ − z̄)    (35)

where ϑ is the solution of the single nonlinear equation

(1/N) Σᵢ (x₀ᵢ − x̄₀)ᵀ [∂R/∂ϑ] (zᵢ − z̄) = 0    (36)

When R(ϑ) = [cos ϑ  sin ϑ; −sin ϑ  cos ϑ] this gives

xᵢ = x̄₀ + (1/s_z²) [b  a; −a  b] (zᵢ − z̄)    (37)

where

a ≡ (1/N) Σᵢ (x₀ᵢ − x̄₀)ᵀ [0  1; −1  0] (zᵢ − z̄)    (38)

b ≡ (1/N) Σᵢ (x₀ᵢ − x̄₀)ᵀ (zᵢ − z̄)    (39)

s_z² ≡ (1/N) Σᵢ (zᵢ − z̄)ᵀ (zᵢ − z̄)    (40)

Two-dimensional networks – rigid transformation:

xᵢ = x̄₀ + R(ϑ)(zᵢ − z̄)    (41)

where ϑ is the solution of the single nonlinear equation

(1/N) Σᵢ (x₀ᵢ − x̄₀)ᵀ [∂R/∂ϑ] (zᵢ − z̄) = 0    (42)
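The two-dimensional similarity-case closed form of Eqs. (37)–(40) can be checked numerically. The sketch below uses toy coordinates of my own (not from the text); it builds xᵢ from Eq. (37) and verifies that the result is not beaten by a brute-force search over rotation angles, each taken with its optimal scale, so that the cost of Eq. (28) is indeed (numerically) minimal:

```python
import math
import random

random.seed(0)
N = 5
z = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(N)]
x0 = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(N)]

mean = lambda pts: tuple(sum(p[k] for p in pts) / len(pts) for k in (0, 1))
zb, xb = mean(z), mean(x0)
dz = [(p[0] - zb[0], p[1] - zb[1]) for p in z]
dx = [(p[0] - xb[0], p[1] - xb[1]) for p in x0]

# Eqs. (38)-(40)
a = sum(u[0] * v[1] - u[1] * v[0] for u, v in zip(dx, dz)) / N  # dx^T [0 1; -1 0] dz
b = sum(u[0] * v[0] + u[1] * v[1] for u, v in zip(dx, dz)) / N
sz2 = sum(v[0] ** 2 + v[1] ** 2 for v in dz) / N

# Eq. (37): x_i = xb + (1/sz2) [b a; -a b] (z_i - zb)
x = [(xb[0] + (b * v[0] + a * v[1]) / sz2,
      xb[1] + (-a * v[0] + b * v[1]) / sz2) for v in dz]

cost = lambda pts: sum((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
                       for p, q in zip(pts, x0))

# Brute force over rotation angles, with the optimal scale for each angle:
best = float("inf")
for k in range(3600):
    th = 2 * math.pi * k / 3600
    c, s = math.cos(th), math.sin(th)
    rz = [(c * v[0] + s * v[1], -s * v[0] + c * v[1]) for v in dz]
    lam = sum(u[0] * w[0] + u[1] * w[1] for u, w in zip(dx, rz)) / sz2 / N
    best = min(best, cost([(xb[0] + lam * w[0], xb[1] + lam * w[1]) for w in rz]))

assert cost(x) <= best + 1e-9

# Stationarity, Eq. (36): sum dx^T (dR/dtheta) dz = 0 at theta = atan2(a, b)
c, s = math.cos(math.atan2(a, b)), math.sin(math.atan2(a, b))
g36 = sum(-s * (u[0] * v[0] + u[1] * v[1]) + c * (u[0] * v[1] - u[1] * v[0])
          for u, v in zip(dx, dz)) / N
assert abs(g36) < 1e-9
```

The optimal angle satisfies tan ϑ = a/b, and λR(ϑ) collapses to the matrix (1/s_z²)[b a; −a b] of Eq. (37), which is why no explicit angle appears in the closed form.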
cos # sin #
When R…#† ˆ F0 FT0 vi ˆ l2i vi ) F0 FT0 VMVT …50†
ÿ sin  # cos#
1 b a V ˆ ‰v1 v2 . . . vn Š; Mik ˆ dik l2i
xi ˆ x0  p …zi ÿ z† …43†
2
a ‡b 2 ÿa b
The eigenvectors fui g; fvi g form orthonormal systems
where a and b are de®ned as in the previous case. and they can be seen as the representations with respect
to the initial bases of the new bases, in which case U and
V become the matrices of change of bases, already
6 Application to the linear case introduced with the same notation. Furthermore the
number of non-zero eigenvalues is r ˆ rank…f † for both
We have outlined an approach to the representation of k2i and l2i , which are also identical, i.e., l2i ˆ k2i ;
generalized inverses of nonlinear operators, and the i ˆ 1; 2; . . . r. In this case, if K is the r  r diagonal
question arises how this approach agrees with the matrix with elements
existing representation theory for (linear) generalized
inverses of linear operators (Takos 1976; Teunissen 1985). Since linear operators are simply special cases of nonlinear operators, we will proceed to show that the existing theory can be derived as a special case of our nonlinear theory.

If f : X → Y is a linear operator, we seek a linear operator g : Y → X which is a generalized inverse of f, i.e., which satisfies property (G1). In terms of given "initial" orthonormal bases {e_i^{X0}} for X and {e_i^{Y0}} for Y, f is represented by an n × m matrix F_0 and we seek the m × n matrix G_0 representing g. We call these bases initial because we are going to carry out our investigation in terms of "new", more convenient orthonormal bases {e_i^X}, {e_i^Y}, with respect to which f and g are represented by matrices F and G, respectively. It is well known that if U and V are the orthogonal matrices of the change of bases, i.e.,

$$ e_i^X = \sum_{k=1}^{m} U_{ik}\, e_k^{X_0}, \qquad e_i^Y = \sum_{k=1}^{n} V_{ik}\, e_k^{Y_0} \tag{44} $$

it holds that

$$ F = V^T F_0 U, \qquad G = U^T G_0 V \tag{45} $$

$$ F_0 = V F U^T, \qquad G_0 = U G V^T \tag{46} $$

For vectors x ∈ X and y ∈ Y it holds for their representations that

$$ x = U^T x_0, \qquad y = V^T y_0 \tag{47} $$

$$ x_0 = U x, \qquad y_0 = V y \tag{48} $$

The choice of the new bases will be based on the so-called singular value decomposition (Rao and Mitra 1971, p. 6), where {e_i^X} and {e_i^Y} are the eigenvectors of the mappings (f* ∘ f) : X → X and (f ∘ f*) : Y → Y, respectively, where f* : Y → X is the adjoint mapping of f. Recall that if (·,·)_X and (·,·)_Y are the inner products in X and Y, then f* is defined by (y, f(x))_Y = (f*(y), x)_X for every x ∈ X and y ∈ Y (see, e.g., Halmos 1974, Sect. 44 and Sect. 68). In terms of the initial bases f* is represented by F_0^T, (f* ∘ f) by F_0^T F_0 and (f ∘ f*) by F_0 F_0^T, while the eigenvector problems are expressed by

$$ F_0^T F_0 u_i = \lambda_i^2 u_i \;\Rightarrow\; F_0^T F_0 = U L U^T, \qquad U = [u_1\ u_2\ \ldots\ u_m], \quad L_{ik} = \delta_{ik}\lambda_i^2 \tag{49} $$

$$ F_0 F_0^T v_i = \lambda_i^2 v_i \;\Rightarrow\; F_0 F_0^T = V M V^T, \qquad V = [v_1\ v_2\ \ldots\ v_n], \quad M_{ik} = \delta_{ik}\lambda_i^2 \tag{50} $$

With Λ the r × r diagonal matrix of the nonzero singular values,

$$ \Lambda_{ik} = \delta_{ik}\lambda_i \tag{51} $$

the representations with respect to the new bases are

$$ F^T F = U^T (F_0^T F_0)\, U = L = \begin{bmatrix} \Lambda^2 & 0 \\ 0 & 0 \end{bmatrix} \tag{52} $$

$$ F F^T = V^T (F_0 F_0^T)\, V = M = \begin{bmatrix} \Lambda^2 & 0 \\ 0 & 0 \end{bmatrix} \tag{53} $$

$$ F = V^T F_0 U = \begin{bmatrix} \Lambda & 0 \\ 0 & 0 \end{bmatrix} \tag{54} $$

The simplicity of this representation is the key to the specific choice of the new bases. This is essentially the same representation as in Takos (1976), since if the normality requirement is removed we can obtain a representation with respect to a new orthogonal (but not necessarily orthonormal) base

$$ \begin{bmatrix} \Lambda^{-1/2} & 0 \\ 0 & I_{n-r} \end{bmatrix} V^T F_0 U \begin{bmatrix} \Lambda^{-1/2} & 0 \\ 0 & I_{m-r} \end{bmatrix} = \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix} \tag{55} $$

We shall stick to the representation of Eq. (54) and will relate the section S, the fibering P and its refinement G, which are needed for the determination of g, to the submatrices of

$$ G = \begin{bmatrix} T & H \\ J & K \end{bmatrix} = \begin{bmatrix} \Lambda^{-1} & H \\ J & K \end{bmatrix} \tag{56} $$

where T = Λ^{-1} follows directly from the property (G1): FGF = F.

From the discussion in Sect. 2 it is obvious that in the linear case the relevant fiberings consist of affine subspaces which are parallel translates of "generating" linear subspaces. They can be identified with the concept of quotient spaces. A quotient space of a linear space Z is a space Z/M, M being a subspace of Z, consisting of equivalence classes of elements of Z, where two elements are equivalent, x_1 ~ x_2, if x_1 − x_2 ∈ M. The fibering P = Y/C is generated by a subspace C, its refinement G = Y/N(g) by N(g), and the fibering Q = F = X/N(f) by N(f). The determination of g is completed by its action from C to the subspace D = g(C).

Let us note first that the idempotent mappings p = f ∘ g and q = g ∘ f are in this case linear projections on R(f) and S, respectively.
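The passage to the new bases in Eqs. (49)-(54) is exactly a full singular value decomposition of F_0. As a quick numerical sketch (the rank-deficient matrix F0 below is purely illustrative, hypothetical data), NumPy's SVD delivers U, V and Λ and reproduces the canonical form of Eq. (54):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, r = 6, 5, 3                      # illustrative sizes with rank defect d = m - r
F0 = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))   # rank-r matrix

# full SVD: F0 = V diag(lambda) U^T, i.e. the change-of-bases matrices of Eq. (45)
V, lam, UT = np.linalg.svd(F0, full_matrices=True)
U = UT.T

F = V.T @ F0 @ U                       # representation in the new bases, Eq. (54)
Lam = np.diag(lam[:r])                 # Lambda of Eq. (51)

assert np.allclose(F[:r, :r], Lam)                   # leading block is Lambda
assert np.allclose(F[r:, :], 0, atol=1e-10)          # remaining blocks vanish
assert np.allclose(F[:, r:], 0, atol=1e-10)
assert np.allclose(F0.T @ F0, U @ np.diag(lam**2) @ U.T)   # eigen-decomposition, Eq. (49)
```

NumPy orders the singular values decreasingly, so the r nonzero ones automatically occupy the leading block, as assumed throughout this section.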
The projection p is represented by the matrix product FG and the projection q by GF, which explicitly are

$$ FG = \begin{bmatrix} \Lambda & 0 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} \Lambda^{-1} & H \\ J & K \end{bmatrix} = \begin{bmatrix} I_r & \Lambda H \\ 0 & 0 \end{bmatrix} \tag{57} $$

$$ GF = \begin{bmatrix} \Lambda^{-1} & H \\ J & K \end{bmatrix}\begin{bmatrix} \Lambda & 0 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} I_r & 0 \\ J\Lambda & 0 \end{bmatrix} \tag{58} $$

Lemma. The subspace R(f) is represented by R([I_r; 0]) while N(f) is represented by R([0; I_d]).

Proof. x ∈ N(f) ⟺ Fx = 0 ⟺

$$ \begin{bmatrix} \Lambda & 0 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} \Lambda x_1 \\ 0 \end{bmatrix} = 0 \;\Leftrightarrow\; x_1 = 0 \;\Leftrightarrow\; x = \begin{bmatrix} 0 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ I_d \end{bmatrix} x_2 \;\Leftrightarrow\; x \in R\begin{bmatrix} 0 \\ I_d \end{bmatrix} $$

y ∈ R(f) ⟺ there exists x ∈ R^m with y = Fx ⟺

$$ y = \begin{bmatrix} \Lambda & 0 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} \Lambda x_1 \\ 0 \end{bmatrix} = \begin{bmatrix} I_r \\ 0 \end{bmatrix}\Lambda x_1 \in R\begin{bmatrix} I_r \\ 0 \end{bmatrix} \qquad \square $$

Lemma. The section S = g(R(f)) is represented by the range R([I_r; JΛ]).

Proof. If x̂ ∈ S = R(q) there exists x ∈ X such that x̂ = q(x), or

$$ \hat{x} = GFx = \begin{bmatrix} I_r & 0 \\ J\Lambda & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} x_1 \\ J\Lambda x_1 \end{bmatrix} = \begin{bmatrix} I_r \\ J\Lambda \end{bmatrix} x_1 \in R\begin{bmatrix} I_r \\ J\Lambda \end{bmatrix} \qquad \square $$

Lemma. The subspace C ⊂ Y defined by C = N(p) = N(f ∘ g) is represented as the range R([−ΛH; I_f]).

Proof. y ∈ C ⟺ p(y) = (f ∘ g)(y) = 0 ⟺ FGy = 0 ⟺

$$ \begin{bmatrix} I_r & \Lambda H \\ 0 & 0 \end{bmatrix}\begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} y_1 + \Lambda H y_2 \\ 0 \end{bmatrix} = 0 \;\Leftrightarrow\; y_1 = -\Lambda H y_2 \;\Leftrightarrow\; y = \begin{bmatrix} -\Lambda H y_2 \\ y_2 \end{bmatrix} = \begin{bmatrix} -\Lambda H \\ I_f \end{bmatrix} y_2 \in R\begin{bmatrix} -\Lambda H \\ I_f \end{bmatrix} \qquad \square $$

The columns of [I_r; JΛ] represent a basis for S, the columns of [0; I_d] a basis for N(f), the columns of [I_r; 0] a basis for R(f), and the columns of [−ΛH; I_f] a basis for C. Since X = S ⊕ N(f), the columns of

$$ \begin{bmatrix} I_r & 0 \\ J\Lambda & I_d \end{bmatrix} $$

will represent a basis for X and we may set x = [I_r, 0; JΛ, I_d][x_1; x_2]. Since Y = R(f) ⊕ C, the columns of

$$ \begin{bmatrix} I_r & -\Lambda H \\ 0 & I_f \end{bmatrix} $$

will represent a basis for Y and we may set y = [I_r, −ΛH; 0, I_f][y_1; y_2]. Upon replacing these representations in x = Gy we obtain

$$ \begin{bmatrix} I_r & 0 \\ J\Lambda & I_d \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = G \begin{bmatrix} I_r & -\Lambda H \\ 0 & I_f \end{bmatrix}\begin{bmatrix} y_1 \\ y_2 \end{bmatrix} $$

and

$$ \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} I_r & 0 \\ -J\Lambda & I_d \end{bmatrix}\begin{bmatrix} \Lambda^{-1} & H \\ J & K \end{bmatrix}\begin{bmatrix} I_r & -\Lambda H \\ 0 & I_f \end{bmatrix}\begin{bmatrix} y_1 \\ y_2 \end{bmatrix} $$

and, after carrying out the computations,

$$ \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} \Lambda^{-1} & 0 \\ 0 & Q \end{bmatrix}\begin{bmatrix} y_1 \\ y_2 \end{bmatrix} \tag{59} $$

where

$$ Q \equiv K - J\Lambda H \tag{60} $$

From the explicit form of Eq. (59), x_1 = Λ^{-1} y_1 and x_2 = Q y_2, it follows that the matrix Q determines the action of g on C, since if y ∈ C then y_1 = 0, x_1 = 0 and

$$ x = \begin{bmatrix} 0 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ Q y_2 \end{bmatrix} = \begin{bmatrix} 0 \\ I_d \end{bmatrix} Q y_2 \in R\begin{bmatrix} 0 \\ I_d \end{bmatrix} $$

The subspace D = g(C) is represented by R(D), D being any of the m × f matrices given by

$$ D = \begin{bmatrix} \Lambda^{-1} & H \\ J & K \end{bmatrix}\begin{bmatrix} -\Lambda H \\ I_f \end{bmatrix} T = \begin{bmatrix} 0 \\ K - J\Lambda H \end{bmatrix} T \tag{61} $$

where T is any f × f nonsingular matrix.

Summarizing these results we may say that the submatrix J determines the section S, the submatrix H determines the subspace C = N(p) and thus the fibering P, and the remaining submatrix K determines the subspace D = g(C). The linear projection p represented by FG is a projection on R(f), represented by R(F), along the nullspace N(p) = C, represented by R([−ΛH; I_f]). The linear projection q represented by GF is a projection on S, represented by R([I_r; JΛ]), along the nullspace N(q) = N(f), represented by N(F).
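The roles of the submatrices H, J, K summarized above are easy to confirm numerically in the new bases: any choice of them in Eq. (56) satisfies (G1), and FG, GF are the (generally oblique) idempotent projections of Eqs. (57) and (58). A small sketch with arbitrary illustrative sizes and submatrices:

```python
import numpy as np

rng = np.random.default_rng(2)
r, d, f = 3, 2, 3                      # illustrative: d = m - r, f = n - r
m, n = r + d, r + f
Lam = np.diag(rng.uniform(1.0, 2.0, r))

F = np.zeros((n, m)); F[:r, :r] = Lam  # canonical form, Eq. (54)

H = rng.standard_normal((r, f))        # arbitrary submatrices of Eq. (56)
J = rng.standard_normal((d, r))
K = rng.standard_normal((d, f))
G = np.block([[np.linalg.inv(Lam), H], [J, K]])

assert np.allclose(F @ G @ F, F)       # property (G1) holds for any H, J, K

FG = np.block([[np.eye(r), Lam @ H], [np.zeros((f, r)), np.zeros((f, f))]])  # Eq. (57)
GF = np.block([[np.eye(r), np.zeros((r, d))], [J @ Lam, np.zeros((d, d))]])  # Eq. (58)
assert np.allclose(F @ G, FG) and np.allclose(G @ F, GF)
assert np.allclose(FG @ FG, FG) and np.allclose(GF @ GF, GF)  # idempotent projections
```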
We will now take a look into the special types of generalized inverses resulting by adding to the property (G1) a combination of the properties (G2), (G3), (G4).

Lemma. For a reflexive generalized inverse G of F (g of f) it holds that K = JΛH, i.e.,

$$ G = \begin{bmatrix} \Lambda^{-1} & H \\ J & J\Lambda H \end{bmatrix} \tag{62} $$

Proof. It follows directly from the property (G2): GFG = G. □

In the linear case, where X and Y are inner-product linear spaces with distances determined by the norm, the minimum-distance generalized inverse is called the least-squares generalized inverse and, choosing x_0 = 0, the 0-nearest generalized inverse is called the minimum-norm generalized inverse.

Lemma. For a least-squares generalized inverse it holds that H = 0, i.e.,

$$ G = \begin{bmatrix} \Lambda^{-1} & 0 \\ J & K \end{bmatrix} \tag{63} $$

Proof. If G represents a least-squares inverse g then FG must represent an orthogonal projection p on R(f). In other words, for any y ∈ Y, ŷ = p(y), represented by ŷ = FGy, is the closest element to y from R(f) if ŷ − y ⊥ R(f), i.e., ŷ − y ∈ R(f)^⊥, where R(f)^⊥ is the orthogonal complement of R(f) in Y. Since ŷ − y ∈ C and C is a subspace complementary to R(f), it must hold that C = R(f)^⊥. Since C is represented by R([−ΛH; I_f]) it must hold that

$$ \begin{bmatrix} -\Lambda H \\ I_f \end{bmatrix}^T F x = 0 \ \text{for every } x \;\Leftrightarrow\; \begin{bmatrix} -\Lambda H \\ I_f \end{bmatrix}^T \begin{bmatrix} \Lambda & 0 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = 0 \;\Leftrightarrow\; -H^T \Lambda^2 x_1 = 0 \ \text{for every } x_1 \;\Leftrightarrow\; H = 0 \qquad \square $$

Lemma. For a minimum-norm generalized inverse it holds that J = 0, i.e.,

$$ G = \begin{bmatrix} \Lambda^{-1} & H \\ 0 & K \end{bmatrix} \tag{64} $$

Proof. If G represents a minimum-norm inverse g then q, represented by GF, must project every x_0 ∈ F_y to the element x̂ ∈ F_y which has the minimum norm among all elements of the same fiber F_y. Since F_y = x_0 + N(f), this happens only when x̂ ⊥ N(f). We have seen that q maps elements of X on the section S, and in view of the same dimension of these subspaces it must hold that S = N(f)^⊥. The columns of [Λ^{-1}; J] span S and must all be orthogonal to the columns of [0; I_d] spanning N(f). This can be expressed in the form

$$ \begin{bmatrix} \Lambda^{-1} \\ J \end{bmatrix}^T \begin{bmatrix} 0 \\ I_d \end{bmatrix} = 0 \;\Leftrightarrow\; \Lambda^{-T} 0 + J^T I_d = J^T = 0 \;\Leftrightarrow\; J = 0 \qquad \square $$

Remark. Upon looking into the explicit form of the matrices

$$ FG = \begin{bmatrix} I_r & \Lambda H \\ 0 & 0 \end{bmatrix} \quad \text{and} \quad GF = \begin{bmatrix} I_r & 0 \\ J\Lambda & 0 \end{bmatrix} $$

it is easy to see that H = 0 ⟺ (FG)^T = FG, while J = 0 ⟺ (GF)^T = GF. Thus we obtain the more familiar form of the properties (G3) and (G4) for least-squares and minimum-norm generalized inverses of matrices.

Corollary. A reflexive least-squares inverse is represented by

$$ G = \begin{bmatrix} \Lambda^{-1} & 0 \\ J & 0 \end{bmatrix} \tag{65} $$

a reflexive minimum-norm inverse by

$$ G = \begin{bmatrix} \Lambda^{-1} & H \\ 0 & 0 \end{bmatrix} \tag{66} $$

a least squares-minimum norm inverse by

$$ G = \begin{bmatrix} \Lambda^{-1} & 0 \\ 0 & K \end{bmatrix} \tag{67} $$

and finally a reflexive least squares-minimum norm inverse (pseudo-inverse) by

$$ G = \begin{bmatrix} \Lambda^{-1} & 0 \\ 0 & 0 \end{bmatrix} \tag{68} $$

A generalized inverse G of the matrix F is specified if the subspaces C ⊂ Y and S ⊂ X are known by means of specific representations and, in addition, the action of g on C is specified. Alternatively, G is uniquely determined once (in addition to C and S) we know the images under g of the elements of a basis of C, which images span and thus determine both the action of g on C and the subspace D = g(C) ⊂ N(f) ⊂ X.

Let C, S and D be represented in the original bases by R(C_0), R(S_0) and R(D_0), with D_0 = G_0 C_0. In the special bases introduced by the singular value decomposition, C is represented by the range R(C) = R([C_1; C_2]) with |C_2| ≠ 0 (since dim C = f, C is n × f, so that C_1 is r × f and C_2 is f × f), which means that the f columns of C represent the elements of a basis for C. In the same special bases S is represented by the range R(S) = R([S_1; S_2]) with |S_1| ≠ 0 (since dim S = r, S is m × r, so that S_1 is r × r and S_2 is d × r), which means that the r columns of S represent the elements of a basis for S.
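The characterizations H = 0 and J = 0 above, their equivalence with the familiar symmetry conditions noted in the Remark, and the pseudo-inverse form of Eq. (68) can be sketched numerically (sizes and submatrices below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
r, d, f = 3, 2, 3
m, n = r + d, r + f
Lam = np.diag(rng.uniform(1.0, 2.0, r))
F = np.zeros((n, m)); F[:r, :r] = Lam            # canonical F, Eq. (54)

def G_of(H, J, K):                               # Eq. (56) with T = Lam^{-1}
    return np.block([[np.linalg.inv(Lam), H], [J, K]])

H = rng.standard_normal((r, f))
J = rng.standard_normal((d, r))
K = rng.standard_normal((d, f))

G_ls = G_of(np.zeros((r, f)), J, K)              # least squares, Eq. (63): H = 0
G_mn = G_of(H, np.zeros((d, r)), K)              # minimum norm,  Eq. (64): J = 0

assert np.allclose((F @ G_ls).T, F @ G_ls)       # (FG)^T = FG  <=>  H = 0
assert not np.allclose((F @ G_mn).T, F @ G_mn)   # a generic H destroys the symmetry
assert np.allclose((G_mn @ F).T, G_mn @ F)       # (GF)^T = GF  <=>  J = 0
assert np.allclose(G_of(np.zeros((r, f)), np.zeros((d, r)), np.zeros((d, f))),
                   np.linalg.pinv(F))            # Eq. (68) is the Moore-Penrose inverse
```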
The subspace D = g(C) is represented by the matrix D = GC, which means that the f columns of D represent a spanning set of D which, however, is not a basis of D because dim D < f in general.

In order to relate the two types of representations we may apply Eq. (48) to the columns of S, C, and D to obtain

$$ S_0 = U S, \qquad C_0 = V C, \qquad D_0 = U D \tag{69} $$

with inverse relations

$$ S = U^T S_0, \qquad C = V^T C_0, \qquad D = U^T D_0 \tag{70} $$

Setting

$$ U = [U_1 \ U_2], \qquad V = [V_1 \ V_2] \tag{71} $$

where U_1 is m × r, U_2 is m × d, V_1 is n × r and V_2 is n × f, we obtain

$$ S_1 = U_1^T S_0, \qquad S_2 = U_2^T S_0 \tag{72} $$

$$ D_1 = U_1^T D_0, \qquad D_2 = U_2^T D_0 \tag{73} $$

$$ C_1 = V_1^T C_0, \qquad C_2 = V_2^T C_0 \tag{74} $$

These relations can be used to obtain G_0 (the representation of g in the original bases) in terms of the matrices S_0, C_0, D_0 which represent the subspaces S, C, D, respectively, in the original bases.

Since R([S_1; S_2]) = R([I_r; JΛ]), there exists an r × r nonsingular matrix L such that [S_1; S_2] = [I_r; JΛ] L = [L; JΛL], which implies that L = S_1 and

$$ J = S_2 S_1^{-1} \Lambda^{-1} = U_2^T S_0 (U_1^T S_0)^{-1} \Lambda^{-1} \tag{75} $$

Since R([C_1; C_2]) = R([−ΛH; I_f]), there exists an f × f nonsingular matrix T such that [C_1; C_2] = [−ΛH; I_f] T = [−ΛHT; T], which implies that T = C_2 and

$$ H = -\Lambda^{-1} C_1 C_2^{-1} = -\Lambda^{-1} V_1^T C_0 (V_2^T C_0)^{-1} \tag{76} $$

In correspondence to the specific representation of C by R(C), the subspace D = g(C) is represented by R(D), where the m × f matrix D is now given by Eq. (61) with T = C_2:

$$ D = \begin{bmatrix} 0 \\ K - J\Lambda H \end{bmatrix} C_2 = \begin{bmatrix} 0 \\ Q C_2 \end{bmatrix} = \begin{bmatrix} 0 \\ D_2 \end{bmatrix} \tag{77} $$

where

$$ D_1 = U_1^T D_0 = 0 \tag{78} $$

$$ D_2 \equiv Q C_2 = (K - J\Lambda H) C_2 \tag{79} $$

and

$$ K = Q + J\Lambda H = D_2 C_2^{-1} + J\Lambda H = U_2^T D_0 (V_2^T C_0)^{-1} - U_2^T S_0 (U_1^T S_0)^{-1} \Lambda^{-1} V_1^T C_0 (V_2^T C_0)^{-1} \tag{80} $$

The upper zero submatrix D_1 in D is a direct consequence of the fact that D ⊂ N(f), where N(f) is represented by R([0; I_d]). The same fact is expressed in the original bases by the condition U_1^T D_0 = 0.

Remark. The matrix D is more than a representation of the subspace D. This would be the case if the matrix T were chosen arbitrarily. The choice T = C_2 makes D equal to GC, and thus the columns of D are images under g of the columns of C, i.e., of the specific given description of C. Thus D not only describes the subspace D but also describes the action of g on the subspace C. Due to the fact that D ⊂ N(f), the relation D = GC degenerates into D_2 = QC_2. Every column of C is uniquely determined by the corresponding column of C_2: indeed, if e_i is the i-th column of I_f, then the i-th column of C is Ce_i = [−ΛH; I_f] C_2 e_i and is determined by C_2 e_i, the i-th column of C_2. Therefore C_2 represents in a certain sense C and thus the subspace C. The matrix Q, which is uniquely related to K once H and J have been fixed by the choice of C and S, defines through the relation D_2 = QC_2 the action of g on C. □

Thus the submatrices J, H and K which determine G have now been related to the matrices C_0, S_0, and D_0 which determine the subspaces C, S, and D, respectively, as well as the action of g on C. The matrix G_0 = U G V^T can now be expressed in terms of the matrices C_0, S_0, and D_0 and the change-of-bases matrices U and V, which also assume geometric representations.
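Equations (75), (76) and (80) can be exercised numerically: starting from arbitrary admissible matrices S0, C0, D0 (hypothetical illustrative data), they yield submatrices H, J, K whose G0 = U G V^T is a generalized inverse realizing the prescribed action D0 = G0 C0. A sketch:

```python
import numpy as np

rng = np.random.default_rng(6)
n, m, r = 6, 5, 3
f, d = n - r, m - r
F0 = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))   # rank r

V, lam, UT = np.linalg.svd(F0)
U = UT.T
U1, U2, V1, V2 = U[:, :r], U[:, r:], V[:, :r], V[:, r:]
Lam = np.diag(lam[:r])

S0 = rng.standard_normal((m, r))        # section S  (|U1^T S0| != 0 almost surely)
C0 = rng.standard_normal((n, f))        # subspace C (|V2^T C0| != 0 almost surely)
D0 = U2 @ rng.standard_normal((d, f))   # D = g(C) inside N(f), so U1^T D0 = 0

iL = np.linalg.inv(Lam)
J = U2.T @ S0 @ np.linalg.inv(U1.T @ S0) @ iL            # Eq. (75)
H = -iL @ V1.T @ C0 @ np.linalg.inv(V2.T @ C0)           # Eq. (76)
K = U2.T @ D0 @ np.linalg.inv(V2.T @ C0) + J @ Lam @ H   # Eq. (80)

G0 = U @ np.block([[iL, H], [J, K]]) @ V.T               # Eqs. (46), (56)

assert np.allclose(F0 @ G0 @ F0, F0)   # property (G1)
assert np.allclose(G0 @ C0, D0)        # the prescribed action of g on C
```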
Lemma. In the original basis R(f) is represented by R(V_1), R(f)^⊥ by R(V_2), N(f) by R(U_2) and N(f)^⊥ by R(U_1).

Proof. Since V and U are orthogonal we obtain

$$ I = U^T U \;\Rightarrow\; U_1^T U_1 = I_r, \quad U_2^T U_2 = I_d, \quad U_1^T U_2 = 0, \quad U_2^T U_1 = 0 $$

$$ I = V^T V \;\Rightarrow\; V_1^T V_1 = I_r, \quad V_2^T V_2 = I_f, \quad V_1^T V_2 = 0, \quad V_2^T V_1 = 0 $$

R(f) is represented in the special basis by R([I_r; 0]), so that in the original basis it will be represented according to Eq. (48) by the column range of V [I_r; 0] = V_1. From V_2^T V_1 = 0 it follows that R(f)^⊥ is represented by R(V_2). N(f) is represented in the special basis by R([0; I_d]), so that in the original basis it will be represented according to Eq. (48) by the column range of U [0; I_d] = U_2. From U_1^T U_2 = 0 it follows that N(f)^⊥ is represented by R(U_1). □

Remark. Since the column space of a matrix remains the same when the matrix is multiplied from the right by a nonsingular matrix, we have the equally valid representations of R(f) by R(V_1 Λ) and of N(f)^⊥ by R(U_1 Λ). □

Lemma.

$$ G_0 = S_0 [(U_1 \Lambda)^T S_0]^{-1} [V_1^T - V_1^T C_0 (V_2^T C_0)^{-1} V_2^T] + D_0 [V_2^T C_0]^{-1} V_2^T \tag{81} $$

Proof.

$$ G_0 = U G V^T = [U_1 \ U_2] \begin{bmatrix} \Lambda^{-1} & H \\ J & K \end{bmatrix} \begin{bmatrix} V_1^T \\ V_2^T \end{bmatrix} = U_1 \Lambda^{-1} V_1^T + U_1 H V_2^T + U_2 J V_1^T + U_2 K V_2^T $$

Replacing H, J and K from Eqs. (75), (76) and (80), and writing U_1 Λ^{-1} = U_1 U_1^T S_0 (U_1^T S_0)^{-1} Λ^{-1}, gives

$$ G_0 = U_1 U_1^T S_0 (U_1^T S_0)^{-1} \Lambda^{-1} [V_1^T - V_1^T C_0 (V_2^T C_0)^{-1} V_2^T] + U_2 U_2^T S_0 (U_1^T S_0)^{-1} \Lambda^{-1} [V_1^T - V_1^T C_0 (V_2^T C_0)^{-1} V_2^T] + U_2 U_2^T D_0 (V_2^T C_0)^{-1} V_2^T $$

which, in view of I = U U^T = U_1 U_1^T + U_2 U_2^T and the condition U_1^T D_0 = 0, becomes

$$ G_0 = S_0 (U_1^T S_0)^{-1} \Lambda^{-1} [V_1^T - V_1^T C_0 (V_2^T C_0)^{-1} V_2^T] + (I - U_1 U_1^T) D_0 (V_2^T C_0)^{-1} V_2^T = S_0 [\Lambda U_1^T S_0]^{-1} [V_1^T - V_1^T C_0 (V_2^T C_0)^{-1} V_2^T] + D_0 [V_2^T C_0]^{-1} V_2^T \qquad \square $$

The representation of a general generalized inverse given by Eq. (81), despite its algebraic form and origin, has a purely geometric character. All the submatrices appearing in it have clear geometric meanings. A similar representation has been presented by Teunissen (1985, Eq. 2.8), which in our notation can be written as

$$ G_0 = S_0 [(C_0^\perp)^T F_0 S_0]^{-1} (C_0^\perp)^T + D_0 [((F_0 S_0)^\perp)^T C_0]^{-1} ((F_0 S_0)^\perp)^T \tag{82} $$

where for a p × q matrix M with q < p, M^⊥ denotes a p × (p − q) matrix such that rank[M | M^⊥] = p and M^T M^⊥ = 0 (the column span of M^⊥ is an orthogonal complement of the column span of M in E^p). We prove the identity of the two representations of Eqs. (81) and (82) in Appendix B.

In the representation of Eq. (82) the matrices C_0^⊥ and (F_0 S_0)^⊥ are not uniquely defined, although G_0 is. Our (admittedly less elegant) representation of Eq. (81) gives a unique expression for G_0 in terms of the chosen specific descriptions of the subspaces C, S, D and the action of g on C, as well as in terms of the specific descriptions of R(f), N(f), R(f)^⊥ and N(f)^⊥ associated with the nonzero eigenvalues and the eigenvectors of the matrices F_0^T F_0 and F_0 F_0^T. Of course the choice of descriptions for the above subspaces is not unique.

It is easy to derive corresponding representations for the special types of generalized inverses. We can transform the relevant representations Eqs. (56), (62)-(68) in terms of the submatrices H, J, K to the original bases using Eqs. (75), (76), (80). An alternative approach is to transform the restricting properties K = JΛH, H = 0, J = 0 to the original bases using Eqs. (75), (76), (80), and then apply the resulting conditions D_0 = 0, V_1^T C_0 = 0, U_2^T S_0 = 0, respectively, on the general representation Eq. (81). The results are:

Reflexive generalized inverse:

$$ G_0 = S_0 [(U_1 \Lambda)^T S_0]^{-1} V_1^T [I - C_0 (V_2^T C_0)^{-1} V_2^T] \tag{83} $$

Least-squares generalized inverse:

$$ G_0 = S_0 [(U_1 \Lambda)^T S_0]^{-1} V_1^T + D_0 [V_2^T C_0]^{-1} V_2^T \tag{84} $$

Minimum-norm generalized inverse:

$$ G_0 = U_1 (V_1 \Lambda)^{-1} - S_0 [(U_1 \Lambda)^T S_0]^{-1} V_1^T C_0 (V_2^T C_0)^{-1} V_2^T + D_0 [V_2^T C_0]^{-1} V_2^T \tag{85} $$

where (V_1 Λ)^{-1} is understood as the left inverse Λ^{-1} V_1^T of the full-column-rank matrix V_1 Λ.

Reflexive least-squares inverse:

$$ G_0 = S_0 [(U_1 \Lambda)^T S_0]^{-1} V_1^T \tag{86} $$

Reflexive minimum-norm inverse:

$$ G_0 = U_1 (V_1 \Lambda)^{-1} [I - C_0 (V_2^T C_0)^{-1} V_2^T] \tag{87} $$

Least squares-minimum norm inverse:

$$ G_0 = U_1 (V_1 \Lambda)^{-1} + D_0 [V_2^T C_0]^{-1} V_2^T \tag{88} $$

Pseudo-inverse:

$$ G_0 = U_1 \Lambda^{-1} V_1^T = (U_1 \Lambda) \Lambda^{-1} (V_1 \Lambda)^{-1} \tag{89} $$

All these representations involve the submatrices C_0, S_0, V_1, V_2, (V_1 Λ) and (U_1 Λ), which have a clear geometrical meaning.
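The general representation of Eq. (81), and its collapse to the pseudo-inverse of Eq. (89) for the particular choices S0 = U1, C0 = V2, D0 = 0, can be verified numerically (random illustrative data):

```python
import numpy as np

rng = np.random.default_rng(7)
n, m, r = 6, 5, 3
f = n - r
F0 = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))

V, lam, UT = np.linalg.svd(F0)
U = UT.T
U1, U2, V1, V2 = U[:, :r], U[:, r:], V[:, :r], V[:, r:]
Lam = np.diag(lam[:r])

def g_inverse(S0, C0, D0):
    """General representation of Eq. (81)."""
    iC2 = np.linalg.inv(V2.T @ C0)
    bracket = V1.T - V1.T @ C0 @ iC2 @ V2.T
    return S0 @ np.linalg.inv((U1 @ Lam).T @ S0) @ bracket + D0 @ iC2 @ V2.T

# an arbitrary admissible choice gives a (G1) generalized inverse ...
G0 = g_inverse(rng.standard_normal((m, r)),
               rng.standard_normal((n, f)),
               U2 @ rng.standard_normal((m - r, f)))
assert np.allclose(F0 @ G0 @ F0, F0)

# ... while S0 = U1, C0 = V2, D0 = 0 reduces Eq. (81) to the pseudo-inverse, Eq. (89)
assert np.allclose(g_inverse(U1, V2, np.zeros((m, f))), np.linalg.pinv(F0))
```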
Appendix A

Derivation of the nonlinear Baarda transformations to the inner solution

We shall derive here the solution to the nonlinear Baarda transformations, which are coordinate transformations x = S(z) from any minimum constraints solution z to the x_0-nearest element x of the f-induced fiber F_{f(z)}. Here

$$ x^T = [x_1^T \ x_2^T \ \ldots \ x_N^T], \qquad x_0^T = [x_{01}^T \ x_{02}^T \ \ldots \ x_{0N}^T] \tag{A1} $$

and in the most general case the three-dimensional transformation sought can be expressed pointwise by the similarity transformation

$$ x_i = \lambda R(\theta) z_i + t = x_i(z_i, p), \qquad p^T = [\theta^T \ t^T \ \lambda] \tag{A2} $$

where the transformation parameters p consist of three rotational parameters θ = [θ_1 θ_2 θ_3]^T defining the orthogonal matrix R = R(θ), three parallel displacement parameters t = [t_1 t_2 t_3]^T, and a scale parameter λ.

The problem is to find the optimal values of θ, t, λ which minimize f = f(p) = (x − x_0)^T (x − x_0). We first note that for Ṙ = ∂R/∂α:

$$ R R^T = I \;\Rightarrow\; \dot{R} R^T + R \dot{R}^T = 0 \;\Rightarrow\; \dot{R} R^T = -(\dot{R} R^T)^T \equiv [\omega] \;\Rightarrow\; \dot{R} = [\omega] R $$

where [a] denotes the antisymmetric matrix with [a]b = a × b. This can be applied for the derivatives with respect to θ_k by setting ∂R/∂θ_k = [ω_k] R:

$$ \frac{\partial x_i}{\partial \theta_k} = \lambda \frac{\partial}{\partial \theta_k} R z_i = \lambda [\omega_k] R z_i = -\lambda [(R z_i)]\, \omega_k = -\lambda R [z_i] R^T \omega_k \tag{A3} $$

$$ \frac{\partial x_i}{\partial \theta} = \left[ \frac{\partial x_i}{\partial \theta_1} \ \frac{\partial x_i}{\partial \theta_2} \ \frac{\partial x_i}{\partial \theta_3} \right] = -\lambda R [z_i] R^T [\omega_1 \ \omega_2 \ \omega_3] \equiv -\lambda R [z_i] R^T \Omega \tag{A4} $$

while

$$ \frac{\partial x_i}{\partial t} = I, \qquad \frac{\partial x_i}{\partial \lambda} = R z_i \tag{A5} $$

To minimize f we simply set

$$ \frac{\partial f}{\partial p} = \frac{\partial f}{\partial x} \frac{\partial x}{\partial p} = 2 (x - x_0)^T \left[ \frac{\partial x}{\partial \theta} \ \frac{\partial x}{\partial t} \ \frac{\partial x}{\partial \lambda} \right] = 0 \tag{A6} $$

or explicitly

$$ \sum_i (x_i - x_{0i})^T \frac{\partial x_i}{\partial \theta} = \sum_i (\lambda R z_i + t - x_{0i})^T (-\lambda R [z_i] R^T \Omega) = 0 \tag{A7} $$

$$ \sum_i (x_i - x_{0i})^T \frac{\partial x_i}{\partial t} = \sum_i (\lambda R z_i + t - x_{0i})^T I = 0 \tag{A8} $$

$$ \sum_i (x_i - x_{0i})^T \frac{\partial x_i}{\partial \lambda} = \sum_i (\lambda R z_i + t - x_{0i})^T (R z_i) = 0 \tag{A9} $$

which, assuming that λ ≠ 0 and |Ω| ≠ 0, can be rewritten as

$$ \lambda \sum_i [z_i] z_i + \sum_i [z_i] R^T t = \sum_i [z_i] R^T x_{0i} \tag{A10} $$

$$ \lambda R \sum_i z_i + \sum_i t = \sum_i x_{0i} \tag{A11} $$

$$ \lambda \sum_i z_i^T z_i + \sum_i z_i^T R^T t = \sum_i z_i^T R^T x_{0i} \tag{A12} $$

Taking into account that [z_i] z_i = 0 and setting

$$ \bar{z} \equiv \frac{1}{N} \sum_i z_i, \qquad \bar{x}_0 \equiv \frac{1}{N} \sum_i x_{0i} \tag{A13} $$

Eqs. (A10)-(A12) take the form

$$ N [\bar{z}] R^T t = \sum_i [z_i] R^T x_{0i} \tag{A14} $$

$$ \lambda R \bar{z} + t = \bar{x}_0 \tag{A15} $$

$$ \lambda \sum_i z_i^T z_i + N \bar{z}^T R^T t = \sum_i z_i^T R^T x_{0i} \tag{A16} $$

Solving Eq. (A15) for t,

$$ t = \bar{x}_0 - \lambda R \bar{z} \tag{A17} $$

and replacing in Eq. (A16) gives

$$ \lambda = \frac{\sum_i z_i^T R^T (x_{0i} - \bar{x}_0)}{\sum_i z_i^T z_i - N \bar{z}^T \bar{z}} \tag{A18} $$

which can also be written in the form

$$ \lambda = \frac{\sum_i (x_{0i} - \bar{x}_0)^T R (z_i - \bar{z})}{\sum_i (z_i - \bar{z})^T (z_i - \bar{z})} \tag{A19} $$

To determine R (in fact θ) we replace t in Eq. (A14):

$$ N [\bar{z}] R^T \bar{x}_0 - \lambda N [\bar{z}] \bar{z} = \sum_i [z_i] R^T x_{0i} \tag{A20} $$

which in view of [z̄] z̄ = 0 can be written as

$$ \frac{1}{N} \sum_i [z_i] R^T x_{0i} - [\bar{z}] R^T \bar{x}_0 = \frac{1}{N} \sum_i [x_{0i}] R z_i - [\bar{x}_0] R \bar{z} = 0 \tag{A21} $$
or in the equivalent form

$$ \frac{1}{N} \sum_i [(x_{0i} - \bar{x}_0)] R (z_i - \bar{z}) = 0 \tag{A22} $$

Equation (A22) is a nonlinear equation which can be solved to obtain the values of the parameters θ in any particular parametrization of the rotation matrix R(θ). The obtained values should next be substituted in Eq. (A19) in order to determine the value of the scale parameter λ. The similarity transformation can be realized using these values once t and λ are replaced, from Eqs. (A17) and (A19) respectively, into Eq. (A2), to obtain

$$ x_i = \bar{x}_0 + \frac{\sum_j (x_{0j} - \bar{x}_0)^T R (z_j - \bar{z})}{\sum_j (z_j - \bar{z})^T (z_j - \bar{z})}\, R(\theta) (z_i - \bar{z}) \tag{A23} $$

In the case of the rigid transformation x_i = R(θ) z_i + t we obtain only Eqs. (A7), (A8) with λ = 1 and, following the same procedure as before, we arrive at the solution

$$ \frac{1}{N} \sum_i [(x_{0i} - \bar{x}_0)] R (z_i - \bar{z}) = 0 \tag{A24} $$

$$ x_i = \bar{x}_0 + R(\theta) (z_i - \bar{z}) \tag{A25} $$

The planar (two-dimensional) case is somewhat different, since the 2 × 2 rotation matrix R(ϑ) depends only on a single parameter ϑ. Setting

$$ W \equiv \frac{\partial R}{\partial \vartheta} R^T = \dot{R} R^T \tag{A26} $$

where W is a 2 × 2 antisymmetric matrix, the only difference from the three-dimensional similarity transformation case (apart from the fact that the vectors x_i, x_{0i}, z_i, x̄_0, z̄ involved are now two-dimensional) is that Eq. (A7) is replaced by

$$ \sum_i (x_i - x_{0i})^T \frac{\partial x_i}{\partial \vartheta} = \sum_i (\lambda R z_i + t - x_{0i})^T (\lambda W R z_i) = 0 \tag{A27} $$

and the final solution for the planar similarity transformation takes the form

$$ \frac{1}{N} \sum_i (x_{0i} - \bar{x}_0)^T W R (z_i - \bar{z}) = 0 \tag{A28} $$

$$ \lambda = \frac{\sum_i (x_{0i} - \bar{x}_0)^T R (z_i - \bar{z})}{\sum_i (z_i - \bar{z})^T (z_i - \bar{z})} \tag{A29} $$

$$ x_i = \bar{x}_0 + \lambda R(\vartheta) (z_i - \bar{z}) \tag{A30} $$

We can proceed further with the solution by choosing the usual parametrization

$$ R(\vartheta) = \begin{bmatrix} \cos\vartheta & \sin\vartheta \\ -\sin\vartheta & \cos\vartheta \end{bmatrix} \;\Rightarrow\; W = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \tag{A31} $$

With this particular choice Eq. (A28) takes the form

$$ a \cos\vartheta = b \sin\vartheta \tag{A32} $$

where

$$ a \equiv \frac{1}{N} \sum_i (x_{0i} - \bar{x}_0)^T W (z_i - \bar{z}), \qquad b \equiv \frac{1}{N} \sum_i (x_{0i} - \bar{x}_0)^T (z_i - \bar{z}) \tag{A33} $$

There are two solutions,

$$ \cos\vartheta = \pm\frac{b}{\sqrt{a^2 + b^2}}, \qquad \sin\vartheta = \pm\frac{a}{\sqrt{a^2 + b^2}} $$

(both positive or both negative). Replacing in Eq. (A29) we obtain

$$ \lambda = \frac{b \cos\vartheta + a \sin\vartheta}{s_z^2} = \pm\frac{\sqrt{a^2 + b^2}}{s_z^2} \tag{A34} $$

where

$$ s_z^2 \equiv \frac{1}{N} \sum_i (z_i - \bar{z})^T (z_i - \bar{z}) \tag{A35} $$

With the obtained values of sin ϑ, cos ϑ and λ, the planar similarity transformation of Eq. (A30) becomes

$$ x_i = \bar{x}_0 + \frac{1}{s_z^2} \begin{bmatrix} b & a \\ -a & b \end{bmatrix} (z_i - \bar{z}) \tag{A36} $$

In the case of the planar rigid transformation, the solution is given by

$$ \frac{1}{N} \sum_i (x_{0i} - \bar{x}_0)^T W R (z_i - \bar{z}) = 0 \tag{A37} $$

$$ x_i = \bar{x}_0 + R(\vartheta) (z_i - \bar{z}) \tag{A38} $$

and for the above specific parametrization of R(ϑ) we obtain the two solutions

$$ x_i = \bar{x}_0 \pm \frac{1}{\sqrt{a^2 + b^2}} \begin{bmatrix} b & a \\ -a & b \end{bmatrix} (z_i - \bar{z}) \tag{A39} $$

which correspond to two rotation angles ϑ and ϑ + π. A further investigation is needed in each particular application in order to determine which of the two solutions minimizes in fact the target function f = (x − x_0)^T (x − x_0).
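The closed-form planar solution of Eqs. (A31)-(A36) is easy to exercise numerically. In the sketch below the target coordinates x0 are generated as an exact similarity image of z (a hypothetical test configuration), in which case Eq. (A36) must reproduce them exactly:

```python
import numpy as np

rng = np.random.default_rng(5)
N = 7
z = rng.standard_normal((N, 2))                  # minimum-constraints coordinates (rows)
W = np.array([[0.0, 1.0], [-1.0, 0.0]])          # Eq. (A31)

# synthetic target: x0 is an exact similarity image of z (assumed test data)
th, scale, t = 0.7, 1.3, np.array([2.0, -1.0])
R = np.array([[np.cos(th), np.sin(th)], [-np.sin(th), np.cos(th)]])
x0 = scale * z @ R.T + t

zb, x0b = z.mean(axis=0), x0.mean(axis=0)
dz, dx = z - zb, x0 - x0b
a = np.mean(np.sum(dx * (dz @ W.T), axis=1))     # Eq. (A33)
b = np.mean(np.sum(dx * dz, axis=1))
s2 = np.mean(np.sum(dz * dz, axis=1))            # Eq. (A35)

# closed-form planar similarity solution, Eq. (A36)
x = x0b + dz @ np.array([[b, a], [-a, b]]).T / s2

assert np.allclose(x, x0)    # exact similarity data are reproduced exactly
```

With noisy x0 the same formula yields the least-squares fit rather than an exact reproduction, and the sign ambiguity of Eq. (A34) is resolved automatically because a and b carry the orientation of the data.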
Appendix B

Proof of the equivalence of the representations of Eqs. (81) and (82)

Our starting point is Eq. (81):

$$ G_0 = S_0 [\Lambda U_1^T S_0]^{-1} [V_1^T - V_1^T C_0 (V_2^T C_0)^{-1} V_2^T] + D_0 [V_2^T C_0]^{-1} V_2^T \tag{B1} $$

We need an interpretation of this formula which will be independent of U_1, U_2, V_1, V_2 and depends only on S_0, C_0, D_0, and related matrices.

Let us consider, in addition to the matrix C with column span R(C), a matrix C^⊥ with column span R(C^⊥) = R(C)^⊥. This means that C^⊥ must be such that rank[C | C^⊥] = n and C^T C^⊥ = 0.

Lemma. A possible choice for C^⊥ is

$$ C^\perp = \begin{bmatrix} I_r \\ -C_2^{-T} C_1^T \end{bmatrix} \tag{B2} $$

Proof. Setting C^⊥ = [A; B] we have

$$ C^T C^\perp = [C_1^T \ C_2^T] \begin{bmatrix} A \\ B \end{bmatrix} = C_1^T A + C_2^T B = 0 $$

yielding B = −C_2^{-T} C_1^T A, while A must be chosen so that

$$ \operatorname{rank}[C \,|\, C^\perp] = \operatorname{rank}\begin{bmatrix} C_1 & A \\ C_2 & -C_2^{-T} C_1^T A \end{bmatrix} = n $$

With the obvious simplest choice A = I_r we obtain C^⊥ = [I_r; −C_2^{-T} C_1^T]. □

Returning to the original bases we have 0 = C^T C^⊥ = C_0^T V C^⊥, implying that a possible choice for C_0^⊥ is

$$ C_0^\perp = V C^\perp = V_1 - V_2 C_2^{-T} C_1^T = V_1 - V_2 (V_2^T C_0)^{-T} C_0^T V_1 $$

The range R(f) is represented by R(F) = R(FS), since f(S) = R(f) and

$$ FS = \begin{bmatrix} \Lambda & 0 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} S_1 \\ S_2 \end{bmatrix} = \begin{bmatrix} \Lambda S_1 \\ 0 \end{bmatrix} $$

Lemma. A possible choice for (FS)^⊥ is

$$ (FS)^\perp = \begin{bmatrix} 0 \\ I_f \end{bmatrix} \tag{B3} $$

Proof. Setting (FS)^⊥ = [A; B] for any matrix with column span R(FS)^⊥, we have

$$ 0 = (FS)^T (FS)^\perp = [S_1^T \Lambda \ \ 0] \begin{bmatrix} A \\ B \end{bmatrix} = S_1^T \Lambda A $$

which yields A = 0, while B must be chosen so that

$$ \operatorname{rank}[FS \,|\, (FS)^\perp] = \operatorname{rank}\begin{bmatrix} \Lambda S_1 & 0 \\ 0 & B \end{bmatrix} = n $$

With the obvious simplest choice B = I_f we obtain (FS)^⊥ = [0; I_f]. □

Returning to the original bases we have

$$ 0 = (FS)^T (FS)^\perp = (V^T F_0 U U^T S_0)^T (FS)^\perp = (F_0 S_0)^T V (FS)^\perp $$

implying that a possible choice for (F_0 S_0)^⊥ is

$$ (F_0 S_0)^\perp = V (FS)^\perp = [V_1 \ V_2]\begin{bmatrix} 0 \\ I_f \end{bmatrix} = V_2 $$

Using the derived relations

$$ C_0^\perp = V_1 - V_2 (V_2^T C_0)^{-T} C_0^T V_1 \tag{B4} $$

$$ (F_0 S_0)^\perp = V_2 \tag{B5} $$

and

$$ F_0 S_0 = V F U^T U S = V (FS) \tag{B6} $$

we obtain

$$ (C_0^\perp)^T F_0 S_0 = [V_1^T - V_1^T C_0 (V_2^T C_0)^{-1} V_2^T][V_1 \ V_2]\begin{bmatrix} \Lambda S_1 \\ 0 \end{bmatrix} = V_1^T V_1 \Lambda S_1 - V_1^T C_0 (V_2^T C_0)^{-1} V_2^T V_1 \Lambda S_1 = \Lambda S_1 = \Lambda U_1^T S_0 \tag{B7} $$

where we have used the fact that V_1^T V_1 = I and V_2^T V_1 = 0 as a consequence of V^T V = I. Utilizing the derived relation Λ U_1^T S_0 = (C_0^⊥)^T F_0 S_0 and the expression for (C_0^⊥)^T in Eq. (B4), the generalized inverse g is represented in the original bases by the matrix

$$ G_0 = S_0 [(C_0^\perp)^T F_0 S_0]^{-1} (C_0^\perp)^T + D_0 [((F_0 S_0)^\perp)^T C_0]^{-1} ((F_0 S_0)^\perp)^T \tag{B8} $$

This equation is identical with the representation of Eq. (82) derived by Teunissen (1985, Eq. 2.8).

References

Baarda W (1973) S-transformations and criterion matrices. Netherlands Geodetic Commission, Publications on Geodesy, New Series, Vol. 5, No. 1

Bjerhammar A (1951) Rectangular reciprocal matrices with special references to geodetic calculations. Bull Geod 52: 188-220

Blaha G (1971) Inner adjustment constraints with emphasis on range observations. Dept Geodetic Science Report No. 148, The Ohio State University

Choquet-Bruhat Y, DeWitt-Morette C, Dillard-Bleick M (1977) Analysis, manifolds and physics. North-Holland, Amsterdam

Dermanis A (1991) A unified approach to linear estimation and prediction. Presented at the IUGG XXth General Assembly,
International Association of Geodesy, Vienna, August 11-24, 1991

Dermanis A, Grafarend E (1981) Estimability analysis of geodetic, astrometric and geodynamical quantities in Very Long Baseline Interferometry. Geophys J R Astronom Soc 64: 31-56

Dermanis A, Sanso F (1995) Nonlinear estimation problems for nonlinear models. Manuscr Geod 20: 110-122

Grafarend E (1974) Optimization of geodetic networks. Boll Geod Sci Affini 33: 351-406

Grafarend E, Kampmann G (1996) C10(3): the ten-parameter conformal group as a datum transformation in three-dimensional Euclidean space. Z Vermessungswesen 121: 68-77

Grafarend E, Schaffrin B (1974) Unbiased free net adjustment. Surv Rev XXII, 171: 200-218

Grafarend E, Schaffrin B (1976) Equivalence of estimable quantities and invariants in geodetic networks. Z Vermessungswesen 101: 485-491

Grafarend E, Schaffrin B (1991) The planar trisection problem and the impact of curvature on non-linear least squares estimation. Comput Stat Data Anal 12: 187-199

Halmos PR (1974) Finite-dimensional vector spaces. Springer, Berlin Heidelberg New York

Koch K-R (1982) S-transformations and projections for obtaining estimable parameters. In: Blotwijk MJ, Brouwer FJJ, van Daalen DT, de Heus HM, Gravesteijn JT, van der Hoek HC, Kok JJ, Teunissen PJG (eds) 40 Years of Thought. Anniversary volume for Prof. Baarda's 65th birthday, Vol. 1. Technische Hogeschool Delft, Delft, pp 136-144

Lohse P (1994) Ausgleichungsrechnung in nichtlinearen Modellen. Deutsche Geod Komm C 429

Loomis LH, Sternberg S (1980) Advanced calculus. Addison-Wesley, Reading, Mass

Meissl P (1965) Über die innere Genauigkeit dreidimensionaler Punkthaufen. Z Vermessungswesen 90: 109-118

Meissl P (1969) Zusammenfassung und Ausbau der inneren Fehlertheorie eines Punkthaufens. Deutsche Geod Komm A 61: 8-21

Mierlo J van (1980) Free network adjustment and S-transformations. Deutsche Geod Komm B 252: 41-54

Moore EH (1920) On the reciprocal of the general algebraic matrix (abstract). Bull Amer Math Soc 26: 394-395

Penrose R (1955) A generalized inverse for matrices. Proc Cambridge Philos Soc 51: 406-413

Rao CR, Mitra SK (1971) Generalized inverse of matrices and its applications. Wiley, New York

Schaffrin B, Heidenreich E, Grafarend E (1977) A representation of the standard gravity field. Manuscr Geod 2: 135-174

Sternberg S (1964) Lectures on differential geometry. Prentice-Hall, Englewood Cliffs, NJ

Takos I (1976) Generalized inverse of matrices and its applications (in Greek). Bull Hellenic Army Geogr Service 1976: 77-133

Teunissen PJG (1985) The geometry of geodetic inverse linear mapping and non-linear adjustment. Netherlands Geodetic Commission, Publications on Geodesy, New Series, Vol. 8, No. 1

Teunissen PJG (1989a) Nonlinear inversion of geodetic and geophysical data: diagnosing nonlinearity. In: Brunner FK, Rizos C (eds) Developments in four-dimensional geodesy. Lecture Notes in Earth Sciences, No. 29. Springer, Berlin Heidelberg New York, pp 241-264

Teunissen PJG (1989b) First and second moments of non-linear least-squares estimators. Bull Geod 63: 253-262

Teunissen PJG (1990) Non-linear least squares. Manuscr Geod 15: 137-150

Xu P (1995) Testing the hypotheses of non-estimable functions in free net adjustment models. Manuscr Geod 20: 73-81