
Introduction

Statement of convention

According to this convention, when an index variable appears twice in a single term and is not otherwise
defined (see free and bound variables), it implies summation of that term over all the values of the
index. So where the indices can range over the set {1, 2, 3},

y = \sum_{i=1}^{3} c_i x^i = c_1 x^1 + c_2 x^2 + c_3 x^3

is simplified by the convention to:

y = c_i x^i.

The upper indices are not exponents but are indices of coordinates, coefficients or basis vectors. That is, in this context x^2 should be understood as the second component of x rather than the square of x (this can occasionally lead to ambiguity). The index in x^i is placed in the upper position because, typically, an index occurs once in an upper (superscript) and once in a lower (subscript) position in a term (see 'Application' below). And typically (x^1, x^2, x^3) would be equivalent to the traditional (x, y, z).
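The implied sum can be made concrete in a few lines of code. The following sketch (with hypothetical values for c and x, and 0-based indices in place of the 1-based mathematical ones) writes out the summation that the notation y = c_i x^i leaves implicit:

```python
# Hypothetical 3-component example: the repeated index i in y = c_i x^i
# implies summation over all values of i (0-based here: 0, 1, 2).
c = [2.0, 3.0, 5.0]   # coefficients c_1, c_2, c_3
x = [1.0, 4.0, 6.0]   # components x^1, x^2, x^3

# Writing out the sum that the convention suppresses:
y = sum(c[i] * x[i] for i in range(3))
print(y)  # 2*1 + 3*4 + 5*6 = 44.0
```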

In general relativity, a common convention is that

the Greek alphabet is used for space and time components, where indices take on values 0, 1, 2, or 3
(frequently used letters are μ, ν, ...),

the Latin alphabet is used for spatial components only, where indices take on values 1, 2, or 3
(frequently used letters are i, j, ...).

In general, indices can range over any indexing set, including an infinite set. This should not be confused
with a typographically similar convention used to distinguish between tensor index notation and the
closely related but distinct basis-independent abstract index notation.

An index that is summed over is a summation index, in this case "i". It is also called a dummy index since
any symbol can replace "i" without changing the meaning of the expression provided that it does not
collide with index symbols in the same term.

An index that is not summed over is a free index and should appear only once per term. If such an index
does appear, it usually also appears in terms belonging to the same sum, with the exception of special
values such as zero.

Application

Einstein notation can be applied in slightly different ways. Typically, each index occurs once in an upper
(superscript) and once in a lower (subscript) position in a term; however, the convention can be applied
more generally to any repeated indices within a term.[2] When dealing with covariant and
contravariant vectors, where the position of an index also indicates the type of vector, the first case
usually applies; a covariant vector can only be contracted with a contravariant vector, corresponding to
summation of the products of coefficients. On the other hand, when there is a fixed coordinate basis (or
when not considering coordinate vectors), one may choose to use only subscripts; see below.
Vector representations

Superscripts and subscripts versus only subscripts

In terms of covariance and contravariance of vectors,

upper indices represent components of contravariant vectors (vectors),

lower indices represent components of covariant vectors (covectors).

They transform contravariantly or covariantly, respectively, with respect to change of basis.

In recognition of this fact, the following notation uses the same symbol both for a (co)vector and
its components, as in:

v = v^i e_i = \begin{bmatrix} e_1 & e_2 & \cdots & e_n \end{bmatrix} \begin{bmatrix} v^1 \\ v^2 \\ \vdots \\ v^n \end{bmatrix} \qquad w = w_i e^i = \begin{bmatrix} w_1 & w_2 & \cdots & w_n \end{bmatrix} \begin{bmatrix} e^1 \\ e^2 \\ \vdots \\ e^n \end{bmatrix}

where v is the vector and v^i are its components (not the ith covector v), w is the covector and w_i are its components.

In the presence of a non-degenerate form (an isomorphism V → V*, for instance a Riemannian
metric or Minkowski metric), one can raise and lower indices.

A basis gives such a form (via the dual basis), hence when working on ℝn with a Euclidean metric and a
fixed orthonormal basis, one has the option to work with only subscripts.

However, if one changes coordinates, the way that coefficients change depends on the variance of the
object, and one cannot ignore the distinction; see covariance and contravariance of vectors.

Mnemonics

In the above example, vectors are represented as n × 1 matrices (column vectors), while covectors are
represented as 1 × n matrices (row covectors).

When using the column vector convention

"Upper indices go up to down; lower indices go left to right."

"Covariant tensors are row vectors that have indices that are below (co-below-row)."

Covectors are row vectors:

\begin{bmatrix} v_1 & \cdots & v_k \end{bmatrix}.

Hence the lower index indicates which column you are in.

Contravariant vectors are column vectors:


\begin{bmatrix} w^1 \\ \vdots \\ w^k \end{bmatrix}

Hence the upper index indicates which row you are in.

Abstract description

The virtue of Einstein notation is that it represents the invariant quantities with a simple notation.

In physics, a scalar is invariant under transformations of basis. In particular, a Lorentz scalar is invariant
under a Lorentz transformation. The individual terms in the sum are not. When the basis is changed,
the components of a vector change by a linear transformation described by a matrix. This led Einstein to
propose the convention that repeated indices imply the summation is to be done.

As for covectors, they change by the inverse matrix. This is designed to guarantee that the linear
function associated with the covector, the sum above, is the same no matter what the basis is.

The value of the Einstein convention is that it applies to other vector spaces built from V using
the tensor product and duality. For example, V ⊗ V, the tensor product of V with itself, has a basis
consisting of tensors of the form eij = ei ⊗ ej. Any tensor T in V ⊗ V can be written as:

\mathbf{T} = T^{ij} \mathbf{e}_{ij}.

V*, the dual of V, has a basis e^1, e^2, ..., e^n which obeys the rule

\mathbf{e}^i(\mathbf{e}_j) = \delta^i_j.

where δ is the Kronecker delta. As

\operatorname{Hom}(V, W) = V^* \otimes W

the row/column coordinates on a matrix correspond to the upper/lower indices on the tensor product.

Common operations in this notation

In Einstein notation, the usual element reference A_{mn} for the mth row and nth column of
matrix A becomes A^m{}_n. We can then write the following operations in Einstein notation as follows.

Inner product (hence also vector dot product)

Using an orthogonal basis, the inner product is the sum of corresponding components multiplied
together:

\mathbf{u} \cdot \mathbf{v} = u^j v_j

This can also be calculated by multiplying the covector on the vector.
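As a sketch of the contraction u^j v_j, the following Python fragment (with hypothetical component values) sums the products of corresponding components over the repeated index j:

```python
# Inner product u . v = u^j v_j: the repeated index j is summed over.
# In an orthonormal basis, the covariant components v_j numerically
# equal the contravariant ones v^j, so plain lists suffice here.
u = [1.0, 2.0, 3.0]
v = [4.0, -5.0, 6.0]

dot = sum(u[j] * v[j] for j in range(len(u)))
print(dot)  # 4 - 10 + 18 = 12.0
```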


Vector cross product

Again using an orthogonal basis (in 3 dimensions) the cross product intrinsically involves summations
over permutations of components:

\mathbf{u} \times \mathbf{v} = \varepsilon^i{}_{jk} u^j v^k \mathbf{e}_i

where

\varepsilon^i{}_{jk} = \delta^{il} \varepsilon_{ljk}

ε_{ljk} is the Levi-Civita symbol, and δ^{il} is the generalized Kronecker delta. Based on this definition of ε, there is no difference between ε^i{}_{jk} and ε_{ijk} but the position of indices.
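The double contraction over j and k can be written out directly. This sketch (hypothetical vectors, 0-based indices) uses the standard product formula for the Levi-Civita symbol:

```python
# Cross product via the Levi-Civita symbol:
# (u x v)^i = eps^i_{jk} u^j v^k, summing over the repeated j and k.
def levi_civita(i, j, k):
    """Levi-Civita symbol for 0-based indices i, j, k in {0, 1, 2}:
    +1 for even permutations, -1 for odd, 0 if any index repeats."""
    return (j - i) * (k - i) * (k - j) // 2

u = [1.0, 0.0, 0.0]   # e_1
v = [0.0, 1.0, 0.0]   # e_2

cross = [
    sum(levi_civita(i, j, k) * u[j] * v[k]
        for j in range(3) for k in range(3))
    for i in range(3)
]
print(cross)  # e_1 x e_2 = e_3  ->  [0.0, 0.0, 1.0]
```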

Matrix-vector multiplication

The product of a matrix A^i{}_j with a column vector v^j is:

u_i = (\mathbf{A}\mathbf{v})_i = \sum_{j=1}^{N} A_{ij} v_j

equivalent to

u^i = A^i{}_j v^j

This is a special case of matrix multiplication.
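In the expression u^i = A^i{}_j v^j, j is the dummy (summed) index and i the free index, so the result carries one free index and is a vector. A minimal sketch with a hypothetical 2×3 matrix:

```python
# Matrix-vector multiplication u^i = A^i_j v^j: j is contracted,
# i remains free, so the result is a vector with one index.
A = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0]]   # A^i_j: i labels rows, j labels columns
v = [1.0, 0.0, -1.0]    # v^j

u = [sum(A[i][j] * v[j] for j in range(3)) for i in range(2)]
print(u)  # [1 - 3, 4 - 6] = [-2.0, -2.0]
```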

Matrix multiplication

The matrix product of two matrices A^i{}_j and B^j{}_k is:

C_{ik} = (\mathbf{A}\mathbf{B})_{ik} = \sum_{j=1}^{N} A_{ij} B_{jk}

equivalent to

C^i{}_k = A^i{}_j B^j{}_k
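Here the shared index j is contracted away while the free indices i and k survive as the rows and columns of the product. A sketch with hypothetical 2×2 matrices:

```python
# Matrix multiplication C^i_k = A^i_j B^j_k: the repeated index j is
# summed; the free indices i and k index rows and columns of C.
A = [[1.0, 2.0],
     [3.0, 4.0]]
B = [[0.0, 1.0],
     [1.0, 0.0]]   # swaps the columns of A

C = [[sum(A[i][j] * B[j][k] for j in range(2)) for k in range(2)]
     for i in range(2)]
print(C)  # [[2.0, 1.0], [4.0, 3.0]]
```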

Trace

For a square matrix A^i{}_j, the trace is the sum of the diagonal elements, hence the sum over a common
index: A^i{}_i.
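Setting the two indices equal and summing is a contraction of the matrix with itself, as this small sketch (hypothetical matrix) shows:

```python
# Trace: set both indices of A^i_j equal and sum, tr A = A^i_i.
A = [[1.0, 9.0],
     [9.0, 4.0]]

trace = sum(A[i][i] for i in range(2))
print(trace)  # 1 + 4 = 5.0
```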

Outer product

The outer product of the column vector u^i by the row vector v_j yields an m × n matrix A:

A^i{}_j = u^i v_j = (\mathbf{u}\mathbf{v})^i{}_j

Since i and j represent two different indices, there is no summation and the indices are not eliminated
by the multiplication.
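Because no index is repeated in u^i v_j, there is no contraction and every pair (i, j) produces an entry of the result. A sketch with hypothetical m = 3 and n = 2:

```python
# Outer product A^i_j = u^i v_j: i and j are distinct free indices,
# so nothing is summed and the result is an m x n matrix.
u = [1.0, 2.0, 3.0]   # u^i, m = 3
v = [10.0, 20.0]      # v_j, n = 2

A = [[u[i] * v[j] for j in range(2)] for i in range(3)]
print(A)  # [[10.0, 20.0], [20.0, 40.0], [30.0, 60.0]]
```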

Raising and lowering indices


Given a tensor, one can raise or lower an index by contracting the tensor with the metric
tensor g_{μν} or its inverse g^{μν}. For example, for a mixed tensor T one can raise an index:

T^{\mu\alpha} = g^{\mu\sigma} T_\sigma{}^\alpha

Or one can lower an index:

T_{\mu\beta} = g_{\mu\sigma} T^\sigma{}_\beta
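The same contraction applies to a single index of any tensor; the simplest case is lowering the index of a vector, v_μ = g_{μσ} v^σ. A sketch using the Minkowski metric with the signature convention diag(1, -1, -1, -1) and a hypothetical four-vector:

```python
# Lowering an index with the metric: v_mu = g_{mu sigma} v^sigma.
# Metric here: Minkowski, signature (+, -, -, -) (one common convention).
g = [[1.0,  0.0,  0.0,  0.0],
     [0.0, -1.0,  0.0,  0.0],
     [0.0,  0.0, -1.0,  0.0],
     [0.0,  0.0,  0.0, -1.0]]

v_up = [2.0, 1.0, 0.0, 3.0]                          # v^sigma
v_down = [sum(g[mu][s] * v_up[s] for s in range(4))  # contract over sigma
          for mu in range(4)]
print(v_down)  # [2.0, -1.0, 0.0, -3.0]
```

With this metric, lowering flips the sign of the spatial components while leaving the time component unchanged.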
