Sie sind auf Seite 1von 33

A Proposal of an Algebra for Vectors and an

Application to Electromagnetism
Diego Saa

Abstract. A new mathematical structure intended to formalize the classical 3D and 4D vectors is briefly described. This structure is offered to
the investigators as a tool that bears the potential of being more appropriate, for its use in Physics and science in general, than any of the
other mathematical structures of geometric origin, such as the Hamilton
(or Pauli or Dirac) quaternions, geometric algebra (GA) and space-time
algebra (STA). The application of this algebra in electromagnetism is
demonstrated, where current concepts are reproduced, in some cases,
and modified, in other cases. Several physical variables are proved to
satisfy the wave equation. It is suggested the need of an electromagnetic
field scalar, with which Maxwells equations are derived as the result
of a simple four-vector product. As a byproduct, new values and units
for the dielectric permittivity and magnetic permeability of vacuum are
proposed.
Mathematics Subject Classification (2010). 02.10.De, 03.50.De, 06.20.fa.
Keywords. four-vectors, quaternions, four-vector derivative, electromagnetic theory.

Diego Sa
a

Contents
1. Introduction
2. Four-vectors
2.1. Advantages of four-vectors
2.2. Complex four-vectors
3. Four-vector algebra
3.1. Sum, difference and conjugates
3.2. Four-vector product
3.3. The Magnitude (Absolute Value) and the Norm
3.4. Unit four-vector
3.5. Identity four-vector
3.6. Multiplicative inverse
4. Four-vector calculus
4.1. Derivatives
4.2. Differential of interval
4.3. Four-Velocity
4.4. Four-gradient
5. Four-vectors in electromagnetism
5.1. Maxwells equations from electromagnetic four-vector
5.2. Electromagnetic four-vector from potential four-vector
5.3. Charge-current four-vector
5.4. Generalized charge-current continuity equations
5.5. Some electromagnetism in classical Physics
5.6. Solution of wave equation
5.7. Covariance of physical laws
5.8. Electromagnetic forces
6. Discussion
Acknowledgment
References

3
8
10
10
11
11
12
13
13
13
14
14
15
16
16
16
17
18
22
24
25
25
26
27
27
29
30
31

Algebra for Vectors and an Application

1. Introduction
Four-vectors are regarded as the most proper mathematical structure for the
handling of the pervasive four-dimensional variables identified in the Physics
of the twentieth-century.
In this paper, a new product is defined for four-vectors, which implies a new
algebra for the handling of four-vectors. It is a new non-associative algebra
that embraces vectors of up to four dimensions that can be extended without undue effort to further dimensions. If you are familiarized with vectors,
you should find it very easy to work with this new mathematical structure,
because it is a rather obvious formalization of vectors.
Nevertheless, to the knowledge of the present writer, this algebraic structure
has not been discovered before, despite the utmost and acknowledged importance of vectors. The scalar and vector products (or dot and cross products)
have not been defined and operated before with a single integrated and coherent vector structure comparable to the one proposed here.
By endowing vectors with the new product, vectors acquire more properties and extend their use for the easier and correct handling of the fourdimensional physical variables. The new four-vector product reveals both the
classical dot and cross products, when the first element of their operands,
called the scalar in quaternionic terminology, is zero. This will be explained
later.
The author had to decide whether the new mathematical structure should
be given a new name, or to maintain the classical one, despite the fact that
a new mathematical form for the product is attributed to them.
The decision of this author has been to preserve the name for the structure,
which also preserves the functionality of three-dimensional vectors.
The reader will be able to judge that this decision is justified by the fact
that the remaining operations and interpretations continue being the same
as what he is used to call simply four-vectors or vectors.
In the present paper, the terms vector product, four-vector product or
simply product will be used to refer to the new product. The classical dot
and cross products will be named as such. Later it will be shown that the
tensor product of a covariant by its corresponding contravariant four-vector
can be reproduced by carrying out the product of the four-vector by itself,
since no distinction is found between the covariant and contravariant forms
of a four-vector in orthonormal coordinates.
The rest of this section is essentially a recount of vector history, so the
bored reader might wish to skim to the next section to confront our new
proposal.
The most similar, to four-vectors, mathematical structure proposed to represent the physical variables has been the quaternions.
Quaternions were known by Gauss by 1819 or 1820, but unpublished. Their
official discovery is attributed to the Irish mathematician William Rowan
Hamilton in 1843 and they have been used for the study of several areas of

Diego Sa
a

Physics, such as mechanics, electromagnetism, rotations and relativity [1],


[2], [3], [4], [5], [6]. James Clerk Maxwell used the quaternion calculus in his
Treatise on Electricity and Magnetism, published in 1873 [7]. An extensive
bibliography of more than one thousand references about Quaternions in
mathematical physics has been compiled by Gsponer and Hurni [8].
The Americans Gibbs and Heaviside discovered the modern vectors between 1888 and 1894. Their work may be considered a sort of combination
of quaternions and ideas developed around 1840 by the German physicist
Hermann Grassman. The notation was primarily borrowed from quaternions
but the geometric interpretation was borrowed from Grassmans system.
Hamilton and his followers, such as Tait, considered quaternions as a mathematical structure with great potential to represent physical variables. Nevertheless, they have not lived up to the expectations of physicists.
By the end of the nineteenth century both mathematicians and physicists were having difficulty applying quaternions to Physics.
The authors of reference [9] explain that quaternions constituted an intermediate step between a plane geometric calculus (represented by the complex
numbers) and the contemporary vector analysis. They allowed to simplify
the writing of a problem and, under certain conditions, allowed the geometric interpretation of the problem. Their multiplication has two physically
meaningful products, but, supposedly, the presence of two parts in the same
number complicated the direct handling of the quaternions.
Maxwell, Heaviside, Gibbs and others noted the problems with the quaternions and began a heated debate, with Tait and other advocates of quaternions, which by 1894 had largely been settled in favor of modern vectors.
Gibbs was acutely aware that quaternionic methods contained the most important pieces of his vector methods. However, in the preface of Gibbs book,
Dr. Edwin Wilson affirms that Notwithstanding the efforts which have been
made during more than a half century to introduce Quaternions into physics
the fact remains that they have not found wide favor. [10]
Crowe comments that Maxwell in general disliked quaternion methods
(as opposed to quaternion ideas); thus for example he was troubled by the
non-homogeneity of the quaternion or full vector product and by the fact
that the square of a vector was negative which in the case of velocity vector made the kinetic energy negative. The aspects that Maxwell liked were
clearly brought in his great work on electricity; the aspects he did not like
were indicated only by the fact that Maxwell did not include them. [11]
Heaviside was aware of the several difficulties caused by quaternions. He
wrote, for example, Another difficulty is in the scalar product of Quaternions being always the negative of the quantity practically concerned. Yet
another is the unreal nature of quaternionic formulae [12]. The difficulty
was a purely pragmatic one, which Heaviside expressed saying that there

Algebra for Vectors and an Application

is much more thinking to be done [to set up quaternion equations], for the
mind has to do what in scalar algebra is done almost mechanically [12].
There is great advantage in most practical work in ignoring the quaternion
altogether, and also the double signification of a vector above referred to,
and in abolishing the quaternionic minus sign. (Heavisides emphasis) [12].
In principle, most everything done with the new system of vectors could be
done with quaternions, but the operations required to make quaternions behave like vectors added difficulty to using them and provided little benefit to
the physicist. Precisely Crowe quotes the following paragraph attributed to
Heaviside: But on proceeding to apply quaternionics to the development of
electrical theory, I found it very inconvenient. Quaternionics was in its vectorial aspects antiphysical and unnatural, and did not harmonise with common
scalar mathematics. So I dropped out the quaternion altogether, and kept to
pure scalars and vectors, using a very simple vectorial algebra in my papers
from 1883 onward. [11]
Alexander MacFarlane was one of the debaters and seems to have been
another of the few in realizing what the real problem with the quaternions
was. MacFarlanes attitude was intermediate - between the position of the
defenders of the GibbsHeaviside system and that of the quaternionists. He
supported the use of the complete quaternionic product of two vectors, but
he accepted that the scalar part of this product should have a positive sign.
According to MacFarlane the equation j k = i was a convention that should
be interpreted in a geometrical way, but he did not accept that it implied the
negative sign of the scalar product. [13] (The emphases are mine).
He incorrectly attributed the problem to a secondary and superficial matter of representation of symbols, instead of blaming to the more profound
definition of the quaternion product. MacFarlane credited the controversy
concerning the sign of the scalar product to the conceptual mixture done
by Hamilton and Tait. He made clear that the negative sign came from the
use of the same symbol to represent both a quadrantal versor and a unitary
vector. His view was that different symbols should be used to represent those
different entities. [13] (The emphasis is mine).
At the beginning of the twentieth century, Physics in general, and relativity theory in particular, was lacking the appropriate mathematical formalism to represent the new physical quantities that were being discovered.
But, despite the fact that it was recognized that all physical variables such as
space-time points, velocities, potentials, currents, etc., must be represented
with four values, quaternions were not used to represent and manipulate
them. It was necessary to develop some new mathematical tools in order
to manipulate such variables. Besides vectors, other systems such as tensors,
spinors and matrices were developed or used to handle the physical variables.
In the course of the twentieth century we have witnessed further efforts to
overcome the remaining difficulties, with the development of other algebras,

Diego Sa
a

which recast several of the ideas of Grassman, Hamilton and Clifford in a


slightly different framework. Examples in this direction are Hestenes Geometric Algebra in three dimensions and Space Time Algebra in four dimensions. [14], [15], [16], [17] [18]
The commutativity of the product was abandoned in all the previous
quaternions and in some algebras, such as the one of Clifford. According to
Gaston Casanova [19] It was the English Clifford who carried out the decisive path of abandoning all the commutativity for the vectors but conserving
their associativity. [19]. Also the Hestenes geometric product conserves
associativity [18]. In this sense, the associativity of the product is finally abandoned in the four-vector product defined later in the present paper. This is a
collateral effect of the proposed algebra, and constitutes a hint about the form
the new four-vectors handle, for example, a sequence of rotations. Besides,
the complex numbers are not handled as in the Hamilton quaternions, where
the real number is situated in the scalar part and the imaginary number in
the vector part. Rather, four-vectors allow that a whole complex number be
placed in each component, so it is possible to have up to four complex numbers. But, what is more important, it is known that in quantum mechanics,
observables do not form an associative algebra, so the present one seems to
be the natural algebra for Physics.
Our intent is to raise the interest in this algebra and try to convince the
reader that the presented here is one of the most important mathematical
tools for Physics.
Paraphrasing Martin Erik Horn [2] about quaternions, Having important consequences for the learning process, the analysis of four-vector representations of other relativistic relationships should be a further theme of
physics education research. . . Due to its structural density, the four-vector
representation is without a doubt a more unified theory in comparison to the
matrix representation.
I would add that the use of four-vectors allows discerning constants,
variables and relations, previously unknown to Physics, which are needed to
complete and make coherent the theory.
In summary, it has been an old dream to express the laws of Physics
with the use of quaternions. But this attempt has been plagued with recurring
pitfalls for reasons until now unknown to both physicists and mathematicians.
Quaternions have not been making problem solving easier or simplifying the
equations.
I believe that this has been due to an internal problem in the definition of the product of the Hamilton quaternions. With the vector algebra
proposed in a previous paper [20] and briefly revised here, the author hopes

Algebra for Vectors and an Application

that the interest and use will reverse in favor of four-vectors, instead of the
Hamilton, Pauli or Dirac quaternions, tensors, geometric algebra, spacetime
algebra and other formalisms.

Despite the fact that the original developers of vector theory had identified the difficulties, it is a fact that, after more than one hundred years of
its inception, vector theory has not yet been endowed with the needed fourvector product, comparable in characteristics to the one of quaternions. This
deficiency is overcome in the present paper.

In the present paper, the synthesis of all the Maxwell equations which
are equivalent to a simple four-vector product is performed through the derivative (four-gradient) of a new electromagnetic four-vector.
This is the application of four-vectors to electromagnetism studied in this
paper. It consists in reproducing some known formulas, in particular the four
Maxwell equations, by taking a single four-vector product and in developing other expressions that describe the interaction between charges, currents,
potentials and electromagnetic fields. Relevant examples are the derivations
that show several physical variables satisfying the homogeneous wave equation. In particular, the potentials and the charge-current four-vectors satisfy
corresponding dAlembert equations.

Several derivations, mainly based on classical vector algebra, have been


abridged in order to maintain the length of this paper within reasonable
limits. The reader should have no problem in reproducing them with the
suggestions provided.

Take attention of number 3 in section 3.2, where an example of the


non-associative nature of the classical vector cross product is exposed, via an
example.
This is well-known, but most of the mathematical tools used in Physics usually handle only associative products. Therefore, the non-associative mathematical structure here proposed could be more appropriate for the handling
of vectors.

The investigators might wish to delve into some possible applications


for evaluating the real potential of this structure and to help to further its
development. The present author suggests Euclidean, projective and conformal geometries and, within Physics, it would be interesting to explore the
Lorentz invariance of Maxwells equations, as well as applications in classical
Mechanics and even in Relativity, Quantum Mechanics and Diracs theory.

Diego Sa
a

2. Four-vectors
The present author, in a former paper [20] proposed the mathematical structure used in the present paper. A revision of the basic algebra is performed
in this and following sections, in order to maintain this paper self-contained.
The proposal is that four-vectors are four-dimensional numbers of the
form:

A = e at + i ax + j ay + k az

(1)

or, assuming that the order of the basis elements e, i, j and k is the
indicated, then those basis elements can be suppressed and included implicitly
in a notation similar to a vector or 4D point:
A = (at , ax , ay , az )

(2)

Four-vectors will be denoted in general with a bold upper-case letter. Threevectors will be denoted in general with bold lower-case letters.
The t, x, y and z, as sub- or super-indexes of the elements, should be interpreted as the space-time coordinate associated to the respective element of
the four-vector.
The classical three-dimensional vectors are represented by just the three
spatial elements of a four-vector. We will represent them also by using the
abbreviated form from expression 2, namely with comma-separated elements
and implicit basis elements i, j and k.
Physicists are accustomed to referring to the first element of a four-vector as
the scalar, and to the remaining three elements as the vector part of the
four-vector. For example, the electromagnetic four-potential is conceived of
as constituted of the scalar potential and the vector potential [21]. We
will also follow such terminology.
Four-vector elements can be any integer, real, imaginary or complex
numbers.
The four basis elements e, i, j and k satisfy the following relations.These
relations define the four-vector product. For simplicity, the operator for the
product will not be shown in print, it is represented implicitly by the space between the pair of four-vectors to be multiplied, and must be assumed present
whenever two four-vectors are separated by a space. The square represents
the product of the element by itself:
e2 = i2 = j2 = k2 = e

(3)

Algebra for Vectors and an Application

Also, the following rules are satisfied by the basis elements:


e i = i e = i,

(4)

e j = j e = j,

(5)

e k = k e = k,

(6)

i j = j i = k,

(7)

j k = k j = i,

(8)

k i = i k = j.

(9)

The relations 3 to 9 give an important operational mechanism to reduce


any combination of two or more indexes to just one.
We will usually make use of this algebra. However, there exists a second,
or alternative algebra, where the right-hand side values of the first three
relations 4-6 are positive. That is:
e i = i e = i,

(10)

e j = j e = j,

(11)

e k = k e = k.

(12)

These properties of the e, i, j, k bases characterize the four-vector product as noncommutative but, what is more important and different with respect to the previous Hamilton and Pauli quaternions as well as to the Clifford
Algebra (see [19], p. 5 axiom 3), the product is in general nonassociative.
This means that the order of the products must be given explicitly, by grouping them with parentheses.
As an example where the order is relevant, consider the following product
of the four basis elements: ((i e) j) k. With the use of 4, reduce i e to i
then by 7 i j to k and finally, by the last relation 3, k k to e. This is
one result. Now consider the same ordering of symbols but with a different
grouping: (i (e j)) k. First reduce the two middle basis elements with the
use of 5 e j to j, then i j to k and then k k to e, we get the
same result but with the sign changed.
If we put these rules into a multiplication table, for four-vectors they
look like this:
**
e
i
j
k

e
e
i
j
k

i
i
e
k
j

j
k
j k
k j
e
i
i
e

10

Diego Sa
a

2.1. Advantages of four-vectors


The four-vector operations have extensive applications in electrodynamics
and relativity. Some of the advantages proposed for the Hamilton quaternions,
Geometric Algebra and Space-Time Algebra, which should be extended to
our new four-vectors, but are not explored here, are:
1. Four-vectors express rotation as a rotation angle about a rotation axis.
This is a more natural way to perceive rotation than Euler angles [22].
2. Non singular representation (compared with Euler angles, for example)
3. More compact (and faster) than matrices. For computation with rotations, four-vectors offer the advantage of requiring only 4 numbers of
storage, compared with 9 numbers for orthogonal matrices [23]. Composition of rotations requires 16 multiplications and 12 additions in
four-vector representation, but 27 multiplications and 18 additions in
matrix representation...The four-vector representation is more immune
to accumulated computational error. [23].
4. The real quaternion units defined by Hamilton together with the scalar
1 (or rather e in our notation) have the advantage to form a closed
four element group, which is not the case with the Pauli-units [24].
5. Every four-vector formula is a proposition in spherical (sometimes degrading to plane) trigonometry, and has the full advantage of the symmetry of the method [25].
6. Unit four-vectors can represent a rotation in 4D space.
7. Four-vectors have been introduced because of their all-attitude capability and numerical advantages in simulation and control [26].
Quaternions have been often used in computer graphics (and associated
geometric analysis) to represent rotations and orientations of objects in 3D
space. This chores should be now undertaken by the four-vectors, which are
more natural, and more compact than other representations such as matrices. Besides, the operations on them, such as composition, can be computed
more efficiently. Four-vectors, as the previous quaternions, will see uses in
control theory, signal processing, attitude control, physics, and orbital mechanics, mainly for representing rotations/orientations in three dimensions.
The spacecraft attitude-control systems should be commanded in terms of
four-vectors, which should also be used to telemeter their current attitude.
The rationale is that combining many four-vector transformations is more
numerically stable than combining many matrix transformations.
2.2. Complex four-vectors
Regularly, four-vectors contain real elements, for example for applications in
geometry. However, the elements handled by complex four-vectors are complex numbers.
The collection of all complex four-vectors forms a vector space of four complex dimensions or eight real dimensions. Combined with the operations of
addition and multiplication, this collection forms a non-commutative and
non-associative algebra. There is no difficulty in obtaining the multiplicative

Algebra for Vectors and an Application

11

inverse of a complex four-vector, when it exists, within four-vector algebra


suggested below. However, there are complex four-vectors whose elements are
different from zero but whose norm is zero. Therefore, complex four-vectors
do not constitute a division algebra.
However, complex four-vectors are very important in the study of electromagnetic fields, as will be seen in the following.

3. Four-vector algebra
A cursory revision of four-vector algebra is performed next. For a more extended analysis of this algebra the reader should refer to a previous paper of
the present author [20].
Let us define two four vectors A and B :
A = eat + iax + jay + kaz
B = ebt + ibx + jby + kbz
3.1. Sum, difference and conjugates
The sum of two four-vectors is another four-vector, where each component
has the sum of the corresponding argument components.
A + B = e(at + bt ) + i(ax + bx ) + j(ay + by ) + k(az + bz )

(13)

The difference of two four-vectors is defined similarly:


A B = e(at bt ) + i(ax bx ) + j(ay by ) + k(az bz ).

(14)

The conjugate of a four-vector changes the signs of the vector part:


A = eat iax jay kaz

(15)

From this definition it is obvious that the result of summing a four-vector


with its conjugate is another four-vector with only the scalar component different from zero. Dividing by two such result, the scalar component is isolated.
The previous operation defines the operator named the anti-commutator or
the Hamiltons scalar operator S : (A + A)/2 = SA. Similarly, the result of
subtracting the conjugate of a four-vector from itself is a pure four-vector
(that is, one whose scalar component is equal to zero), when divided by two
defines the commutator or the Hamiltons vector operator V : (AA)/2 = V A
The complex conjugate or Hermitian conjugate of a four-vector changes
the signs of the imaginary parts. Given the complex four-vector:
A = e(at + ibt ) + i(ax + ibx ) + j(ay + iby ) + k(az + iby )

(16)

Then its complex conjugate is:


A = e(at ibt ) + i(ax ibx ) + j(ay iby ) + k(az iby )

(17)

12

Diego Sa
a

3.2. Four-vector product


Using relations 3 to 9, the four-vector product is given by:
AB =

e(at bt + ax bx + ay by + az bz ) +

(18)

i (at bx + ax bt + ay bz az by ) +
j (at by ax bz + ay bt + az bx ) +
k(at bz + ax by ay bx + az bt ).
With the notation of three-dimensional vector analysis it is possible to
get a shorthand for the product. Regarding i, j, k as unit vectors in a Cartesian
coordinate system, we interpret a generic four-vector A as comprising the
scalar part a and the vector part a = i ax + j ay + k az . Then we write
it in the simplified form A = (a, a). With this notation, the product 18 is
expressed in the compact form:
A B = (a b + a b, a b + a b + a b)

(19)

The product for the alternative algebra, which uses relations 10-12 instead of 4-6, is
A B = (a b + a b, a b a b + a b)
(20)
where the usual rules for vector sum and dot and cross products are
being invoked.
Then, the alternative algebra simply switches the signs of the first and second terms in the vector side of the product and can be computed using the
regular product 19, using conjugates: A B.
The following properties for the product are easily established:
1. If the scalar terms of both argument four-vectors of the product are
zero then the resulting four-vector contains the classical scalar and vector products in its respective components.
2. The product is non-commutative. So, in general, there exist P and Q
such that P Q 6= Q P.
3. Four-vector multiplication is non-associative so, in general, for three
given four-vectors P, Q and R, P (Q R) 6= (P Q) R.
Note that this is different from the Hamilton quaternions and the socalled Clifford Algebras, see for example [27]. It reflects the well known
fact that the associative law does not hold for the vector triple product,
for which: p (q r) 6= (p q) r. Just to provide an example, for
the case of classical vectors, let us assume the three vectors p=(1,5,2),
q=(0,1,0) and r=(1,2,3). Then, the product p (q r) gives (-5,7,-15)
whereas the product (p q) r gives (-2,7,-4).
In order to reproduce this result with the use of four-vectors, the
scalar terms, if any, must be set to zero before performing the products.

Algebra for Vectors and an Application

13

The non-associativity of the product, to account for this property, cannot be found in the quaternions, in geometric or Clifford algebras or in
the standard tensor algebra for four-vectors.
4. The product of a four-vector by itself produces a result different from
zero only in the first or scalar component, which is identified as the
norm of the four-vector. In this sense it is similar to the dot product in
vector calculus:
A A = (a2t + a2x + a2y + a2z , 0, 0, 0)

(21)

Note that this expression is substantially different with respect to


the Hamilton quaternions, in which the square of a quaternion is given
by
A A = (a2t v v, 2 at v),
(22)
where v represents the three-vector terms of the quaternion. Not
only the scalar component has terms with the sign changed, but nonzero term appears in the vector part of the quaternion. This has been a
source of difficulty to apply Hamilton quaternions in Physics, which is
overcome by our four-vectors.
5. The multiplicative inverse of a four-vector is simply the same four-vector
divided by its norm.
3.3. The Magnitude (Absolute Value) and the Norm
The magnitude, or absolute value, of a four-vector is defined as the square
root of the sum of squares of its elements:
q
(23)
|A| = a2t + a2x + a2y + a2z
It can be computed as the square root of the scalar component of the product
A A.
The norm is defined as the square of the absolute value. It can be
computed as the scalar component of the product A A.
3.4. Unit four-vector
A unit four-vector has the magnitude equal to 1. It is obtained by dividing
the original four-vector by its magnitude or absolute value.
3.5. Identity four-vector
The identity four-vector is a unit four-vector that has the scalar part equal to
unity and the vector part equal to zero. Let us denote it with 1 = (1, 0, 0, 0).
It has the following properties, where A is any four-vector:
1 A = A, A 1 = A
As you can see, the 1 is the right identity. The alternative product mentioned
in section 3.2 makes 1 the left identity.

14

Diego Sa
a

3.6. Multiplicative inverse


The multiplicative inverse or simply inverse of a four-vector A is denoted by
A1 , and evaluated as the vector divided by its norm:
A1 = A/|A|2

(24)

The product of the vector by its inverse is the identity four-vector:


A A1 = A1 A = 1

4. Four-vector calculus
We define a four-vector as a set of four quantities, which transform like the
coordinates t, x, y and z. This representation has been enormously successful
in Physics, although, at first sight, it would seem that the time does not
mixes with the spatial coordinates, the electrical charges with the currents
or the energy with the momenta.
The following Subsections describe several four-vectors that resemble and
work as the corresponding ones in Classical Physics. When differences appear they are duly noticed.
In particular, any scientist has to wonder what the effect would be if the
remaining physical variables that still do not have the four-vector form, such
as the electric and magnetic fields, were represented as four-vectors. In Subsections 5.1 and following, such a proposal is explored. In particular, the
electromagnetic four-vector is proposed, which includes the new scalar field
s in combination with the electric and magnetic fields in the vector part of
the four-vector.
This proposal results in a coherent theory that allows deriving the Maxwells
equations, and produces a set of formulas compatible with most of the corresponding classical ones. When some difference appears, such as in the case
of the so-called Lorenz gauge, where the same Classical Physics has had difficulty in proposing just one gauge [28], [29], there appears the electromagnetic
scalar field in a surprising position. In addition, Classical Physics has not been
able to provide a definition for electrical charge in terms of potentials, and,
again, there appears the electromagnetic scalar as a brick for its construction
but without destroying the known relations, such as the equation for chargecurrent continuity.
In the following Subsections, we will first attempt to reproduce some of the
calculus-based four-vectors. Throughout we try to stick to the (-,+,+,+) signature convention, with a Minkowski metric. In order to verify that all the
equations are satisfied, you can begin, for example, with a potential of the
form of equation 107. As a practical recommendation, if all the operators,
such as gradient, divergence and curl are maintained as shown below, then
the time derivative of your B function needs to be multiplied by the square
root of 3 when it appears isolated, such as in Faradays law, 56.

Algebra for Vectors and an Application

15

4.1. Derivatives
The time derivative of a four-vector is defined, as is usual for vectors, deriving
each component separately.
The time derivative, d/dt, of a product of two four-vectors has a form
similar to the conventional derivative of a product, but maintaining the order
(in the following formulae it is assumed that the example four-vectors A and
B are functions of the variable t):
dB dA
d
(A B) = A
+
B
dt
dt
dt
Derivative of the square of a four-vector
If in the previous expression we replace B by the A four-vector:

(25)

d
dA dA
(A A) = A
+
A
(26)
dt
dt
dt
Now if we swap the order of the factors in the last product, we get the
conjugate of the other so, adding both, we note that the vector component is
set to zero. There remains only the scalar component different from zero. The
same can be achieved if we derive the (scalar) obtained by first multiplying
A by itself. This proves that the result of the derivative of the square of a
four-vector is the same either if the four-vector is first multiplied by itself and
then derived or if the derivative rule of a product is applied before deriving
its components. The resulting scalar component is of the form:
dA dA
0, 0, 0)
+
A = (2(a a + b b + c c + d d),
(27)
dt
dt
Derivative of the product of a four-vector by its inverse:
We know that the product of a four-vector A by its inverse A1 is the identity
four-vector, which is a constant. Therefore, in the right-hand side of the
derivative we acquire the null four-vector (zero in all components):
A

d
d
(A A1 ) = (1, 0, 0, 0) = (0, 0, 0, 0)
dt
dt
Or, expanding the derivative of the product:
d
dA 1
dA1
(A A1 ) =
A +A
= (0, 0, 0, 0)
dt
dt
dt
Example: Given the four-vector

(28)

(29)

A = (cos(a t), Log(b t2 ), c/ sinh(t3 ), d)


The time derivative of A is
dA
= (a sin(at), 2/t, 3 c t2 coth(t3 ) csch(t3 ), 0)
dt
The inverse of A and its derivative are much longer and by reasons of space
cannot be included here. However, the reader can verify equivalence 29 by imR

plementing the product 18 in some system such as Maple or Mathematica .

16

Diego Sa
a

4.2. Differential of interval


An arbitrary interval differential is expressed as a four-vector in which each
component is the projection of the interval over each coordinate axis. As an
example, in Cartesian coordinates, we define the four-vector d S or interval
four-vector as follows:
d S = (i c dt, dx, dy, dz)
(30)

Where i is the imaginary unit, 1, here and in the following equations.


The square of the interval is a relativistic invariant, which appears from the
product of the interval four-vector by itself:
d S2 = d S d S = c2 dt2 + dx2 + dy2 + dz2

(31)

Four-vectors offer this great advantage since there is no difference between


the contravariant and covariant forms of a four-vector. The fact is that, when
we use orthonormal sets of coordinates, both sets of basis vectors, that is
covariant and contravariant, coincide and there is no difference in the representation of a vector in each of these basis.
4.3. Four-Velocity
In order to obtain the velocity four-vector, just factor out the time coordinate
differential in the interval four-vector and divide everything by the proper
time differential:
U = (i c, x,
y,
z)

(32)
or concisely
U = (i c, v)

(33)

Where the factor is the quotient of the coordinate time differential divided
by the proper time differential and in practice can be disregarded for small
velocities:
dt
1
=
=p
(34)
d
1 v2 /c2
4.4. Four-gradient
We know that the total differential (magnitude df of an arbitrary scalar
field, given as a function of the time and space coordinates) is
f
f
f
f
dt +
dx +
dy +
dz
(35)
t
x
y
z
From this relation we extract the partial derivatives and separate them
from the interval differential, defined in Subsection 4.2, so that their product restores the magnitude df . In this way we discover the four-vector
appropriate for electromagnetism (later we will conclude that 0 is equal to
1/c):

f f f f
i0 ,
,
,
= f
(36)
t x y z
we recognize this as the four-gradient of the scalar field f . Notice that the
scalar field f is not a four-vector.
df =

Algebra for Vectors and an Application

17

In general, if we suppress the scalar field and leave the rest as an empty
operator, we obtain the four-gradient:

= i0 ,
,
,
(37)
t x y z
and simplifying:


= i0 , = (i t , )
(38)
t
where the three-dimensional vector del is the important Hamilton operator

,
,
(39)
=
x y z
The product of the four-gradient by itself gives the dAlembert operator in
the scalar component of the resulting four-vector. For simplicity, let us write
only the component different from zero:
2
2
2
2
2 +
2 +
2 +
t
x
y
z 2
Or more concisely, using the del () operator:
= 2 = 20

(40)

2
+ 2
(41)
t2
This dAlembert operator generates wave equations when operating on a
scalar field such as the electromagnetic scalar potential, or on a vector field,
such as the electric field.
2 = 20

5. Four-vectors in electromagnetism
Vlaenderen and Waser [30] have proposed a scalar component for the electromagnetic field, and exhibit some reasons to justify its experimental need.
Note however that they include new terms in Maxwells equations, just as
Lyttleton and Bondi or the extended Proca equations do [31]. In all these
cases the equations are different from the classical Maxwells ones and the
origin of such terms is not clear.
Since the twenty century, it is known that every physical variable should be
represented with four components, one time component and three spatial
components. For example, the energy and momentum, the scalar and vector
potentials, or the charge and the current, constitute and are represented as
four-vectors. However, the physicists have not been able to discover the corresponding four-vectors for the the electric and magnetic fields, E and B.
Therefore, let us try to complete the four-vector by assuming that the electromagnetic four-vector, M, includes the scalar component s, in the following
form:
M = (s , m)
(42)
which, expanding the spatial components, is:
M = (s , mx , my , mz )

(43)

18

Diego Sa
a

where the elements of the vector, or spatial, component m are of the form
1 i
mi = i Ei +
B
(44)
0
Or, by using the magnetic field H, we can make disappear from Maxwells
equations all manifestations of the magnetic permeability 0 :
mi = i Ei + Hi

(45)

5.1. Maxwells equations from electromagnetic four-vector


We intend to derive the Maxwells equations from the simple four-vector
product (four-gradient):
M=0
(46)
This represents the product of the four-gradient 38 by the electromagnetic
four-vector 42.
Then, expanding with the four-vector product schema 19:

m
s
+ m, s + i 0
+ m
(47)
M = 0
t
t
or, expanding with 44:
1
s
M = i 0
+ i E +
B,
(48)
t
0

0 B
1
E
+i
+ s + i E +
B
(49)
0
t
0 t
0
To reach to Maxwells equations let us equate this four-vector to zero. Each
of the real and imaginary components must be equated to zero independently
(so it is possible to simplify the imaginary units):
E = 0 s
t
1
0

(50)

B=0

(51)

E = 00 B
t
1
0

B=

0 E
t

(52)
(53)

Let us compare these with the well-known Maxwells equations:


Gauss electric field law:
E=

0

(54)

Gauss magnetic field law:


B = 0,

(55)

Faradays law:
E = B
t ,

(56)

Amperes law:
1
0

B = 0

E
t

+J

(57)

Our intention of generating the Maxwells equations by taking the fourgradient of the electromagnetic four-vector has been almost fulfilled. We

Algebra for Vectors and an Application

19

would be done if the set of equations 50-53 had become identical to the
set 54-57.
Let us try to identify the differences that appeared in the first, third and last
equations, and how to overcome such differences, if possible.
First, in order to unify the last equation of each set, the gradient of the
scalar of the electromagnetic field must be equal to the current density, that
is
s = J
(58)
Not every vector field has a scalar potential; those which do are called conservative. And conversely, it is known that if J is any conservative or potential
vector field, and its components have continuous partial derivatives, then it
has a potential with respect to a reference point, of the form J = s.
Therefore, the current density vector J satisfies the conditions of a conservative field. Besides, from classical vector analysis we know that the curl
of any gradient is zero, so the curl of J is zero, J = 0.
Consequently, applying Stokes theorem to this curl, the line integral of
J around any closed loop is zero:
I
J dl = 0
(59)

Also, following a similar reasoning as in section 2.3 of Griffiths [32], we can


conclude that the electromagnetic scalar between a reference point O and a
point r is independent of the path and given by the line integral:
Z r
s(r) =
J dl
(60)
O

Second, we want that the first equations of both sets be identical with
each other. This is achieved when the time derivative of the scalar component
of the electromagnetic field, s/t, is made identical to the electric charge
density, , divided by the square of the permittivity, 0 :
s
= /20
(61)
t
As will be seen later, all physical variables considered in the present
paper satisfy the dAlembert wave equation with constant coefficient 20 . This
means that the propagation speed of the electromagnetic waves, which is the
speed of light, is some function of the absolute permittivity of vacuum, and
vice versa.
The dAlembert wave equation with coefficient 20 requires that 0 be
interpreted as equal to the inverse of the speed of light. Now, since in our
theory 0 is independent from 0 , we might preserve, or not, the known
relation:
1
= 2.998 108 m/s
(62)
c=
0 0

20

Diego Sa
a

Let us assume that 0 is defined also as equal to the inverse of the speed of
light, so the previous relation is satisfied:
1
0 = 0 = .
(63)
c
Physicists are aware that the choice of units of many universal constants,
such as 0 and 0 , is completely arbitrary in current Physics. For example,
Prof. Littlejohn of University of California at Berkeley expresses the following
in his lecture notes on Quantum Mechanics: In Gaussian units, the unit of
charge is defined to make Coulombs law look simple, that is, with a force
constant equal to 1 (instead of the 1/40 that appears everywhere in SI
units). This leads to a simple rule for translating formulas of electrostatics
(without D) from SI to Gaussian units: just replace 1/40 by 1. Thus, there
are no 0 s in Gaussian units. There are no 0 s either, since these can be
expressed in terms of the speed of light by the relation 0 0 = 1/c2 . Instead
of 0 and 0 s, one sees only factors of c in Gaussian units. [33].
The definition of 0 , as the inverse of the speed of light, is strictly necessary
for 0 , since the electromagnetic waves displace at the speed of light, but 0
does not appear to have such requirement.
Therefore, assuming that 0 = 1/c, let us replace it in the known relation: q 2 = 2 h 0 c, [34]. This imposes the requirement that the elementary
charge, q, be redefined as equal to the square root of Plancks constant, h,
multiplied, for macroscopic applications, by double the fine structure constant
:

(64)
q= 2h
1

Therefore, the dimensions of charge become [M 2 LT 2 ].


From this relation we can compute the conversion constant, let us name it C,
used to convert the electrical units to mechanical units. When this conversion
constant is used, the electrical units, such as the coulomb and ampere, are not
anymore indispensable, except for compatibility with previous knowledge:

1
1
2h
C=
[M 2 LT 2 coul1 ]
(65)
q
The square of C divided by 2 is the von Klitzing constant (about 25812.808
ohm). CODATA 2002 defines this constant as independent and has about
seven digits precision (in 1990 the CIPM adopted exact values for the von
Klitzing constant). With the above mentioned proposal the von Klitzing constant is a function of Planck constant and of the value of the elementary
charge. The number of correct digits can be duplicated. Several other constants such as the elementary charge, Plancks constant, fine structure constant and electron mass, can also be obtained with several additional digits
of precision by making use of the quantum Hall conductance measurements.
This paper is not about physical constants so I cannot go farther on this.
However, the above hints should be enough for the experts to use profitably
to improve the precision of several constants.

Algebra for Vectors and an Application

21

Using equations 50-53 let us prove that the electric and magnetic fields
satisfy the dAlembert wave equation. In both cases it is necessary to use the
following vector identity, for any vector X:
( X) = ( X) 2 X

(66)

First, let us take the time derivative of the proposed Faraday equation 52:
0
E
2B
=
t2
0
t

(67)

Replacing here the definition of E/t obtained from the proposed Ampere
equation 53:
0
1
1
2B
= (
B + s)
2
t
0
0 0
0

(68)

By vector calculus, the curl of any gradient is always zero, so the second term
at the right-hand side vanishes. Applying vector equivalence, 66, and using
Gauss equation for the magnetic field, 51, the result 2 B = 0 is immediate:
20

2B
+ 2 B = 0
t2

(69)

In a similar form, for the electric field, let us take the curl of the proposed
Faradays equation, 52, and use the vector equivalence 66:
( E) 2 E =

0
B

0
t

(70)

Replacing the divergence of E by its equivalent from the proposed Gauss


equation for the electric field, 50, we find the equation:
0

0
B
s 
2 E =
t
0
t

(71)

Equating this to the time derivative of the proposed Ampere equation, 53,
multiplied by 0 , the result 2 E = 0 is immediate:
20

2E
+ 2 E = 0
t2

(72)

After some standard and very well known operations within the study of
electrodynamics, we have arrived to the corresponding dAlembert equation
(or four-dimensional Laplace equation) for the spatial components, E and
B, of the electromagnetic field.
In the following Subsections, similar wave equations are inferred for
other physical variables.

22

Diego Sa
a

5.2. Electromagnetic four-vector from potential four-vector


Let us define the potential four-vector as:
A = (i , A)

(73)

To obtain the rank 2 electromagnetic tensor (Faradays tensor) (here


multiplied by the speed of light), in current Physics it is necessary to carry
out the following tensor operations [32]:
F = A A
(74)
All the signs of the elements of this rank 2 tensor appear in the same order as the one provided in the product for four-vectors, defined in section 3.2:

0
Ex
Ey
Ez
E x
0
c B z c B y

F =
(75)
y
z
E
c B
0
c Bx
z
y
x
E
cB
c B
0
With four-vectors we defined the simple electromagnetic four-vector 42,
which contains more information than this rank 2 tensor (the four-vector includes information about the scalar component, which is new to Physics, and
the four-vector product 18 generates the correct signs in identical positions
as in Faradays tensor 75).
Therefore, in covariant notation, Faradays tensor should be written as:
F = A A + g s

(76)

The scalar field s and the electromagnetic fields E and B can be defined
by performing the following four-vector product:
M = A

(77)

Replacing the four-gradient by its definition 38, where the inverse of the
speed of light is assumed from now on for both 0 and 0 , by relation 63, and
expanding this product with 19 we get

1 A
1
A, i
i + A
(78)
M = (s, i E + cB) =
c t
c t
Equating the corresponding real and imaginary components of this identity,
we find the definition of the electromagnetic scalar in terms of potentials as:
1
s=
A
(79)
c t
Its form is identical to the Lorenz gauge. This gauge is normally assumed
equal to zero in current Physics. However, if this were the case, the electromagnetic scalar, s, would be zero and, according to the present theory, we
would get the homogeneous Maxwells equations. This means that the Lorenz
gauge amounts to assuming that the charge and current densities are zero.
Precisely some studies conclude that the Lorenz gauge is unphysical. For example, in [35], the authors say we see that the Lorenz gauge is in conflict

Algebra for Vectors and an Application

23

with the physical phenomena. This is a crash of the Lorenz gauge. Now it is
essential to find a new transformation of the equations.
The spatial components contain the well known definitions for the electric
field in terms of potentials:
E=

1 A

c t

(80)

and of the magnetic field:


cB=A

(81)

Substituting these definitions into 50-53, they simplify into identities


and wave equations for the potentials and A. Thus, replacing first in Gauss
electric field law, 50, we derive the wave equation for the scalar potential :
1 s
(82)
c t
 1

1 A
1

=

A
(83)
c t
c t
c t
Both terms containing A are identical since the time and space derivatives
can be interchanged. Then
E=

1 2
+ 2 = 0
(84)
c2 t2
Which is the wave equation for the scalar potential, 2 = 0.
Now, by substituting the definition of the magnetic field into 51, produces:

A =0
(85)

Which is an identity because the divergence of a curl is always zero.


Next, let us take Faradays law, 52,
B
(86)
t
and replace in it the definitions of the electric and magnetic fields in terms
of potentials:


1 A
1

=
A
(87)
c t
c t
The time and space derivatives of the A potential from the first term can
be interchanged and in this way becomes identical to the first term in the
right-hand side, being simplified. The curl of a gradient is always zero, so we
obtain an identity.
Finally, from Amperes law, 53,
E=

1 E
s
(88)
c t
by replacing the definitions of the fields in terms of potentials we derive the
wave equation for the vector potential :
 1


1 A
1
A =


A
(89)
c t
c t
c t
c B =

24

Diego Sa
a

Apply the vector identity 66 to the left-hand side term and simplify the two
terms containing :


1 2A
A 2 A = 2 2 + A
c t

(90)

Then simplify the terms at the margins, obtaining 2 A = 0:

1 2A
+ 2 A = 0
c2 t2

(91)

5.3. Charge-current four-vector


Equation 58 defines the current as the gradient of the electromagnetic scalar.
We can take the curl of both sides of such equation, with which the left-hand
side becomes zero because the curl of any gradient is zero. This proves that
the (local) circulation of current is zero. This is new to Physics.
Next we can derive the equation of conservation of charge by obtaining the
divergence of the classical Amperes law, 57, and replacing the divergence of
E by its definition in 54. But, instead of that, let us take the divergence of
our new equation 53. The left-hand side becomes zero because the divergence
of any curl is zero. We obtain:
1
( E) + (s)
c t
Using the Gauss electric field law, 50:
0=

(92)

1 1 s
(
) + (s)
(93)
c t c t
From here and using equations 61 and 58 it is easy to recover the conservation
of charge equation:

(94)
J=
t
However, equation (93) is equivalent to this one and constitutes the wave
equation for the electromagnetic scalar, 2 s = 0:
0=

1 2s
2 s = 0
(95)
c2 t2
There is no need to continue with these operations. It is more profitable to
question whether four-vectors can generate this kind of equations. The answer
is, of course, in the positive. First, we have to apply the gradient four-vector,
38, to the negative of the electromagnetic scalar, s, with which we obtain the
current four-vector :

1 s
, s
(96)
c t
In other words, again by equations 61 and 58, we arrive at the classic chargecurrent four-vector

J = i c, J
(97)
J = (s) = i

Algebra for Vectors and an Application

25

5.4. Generalized charge-current continuity equations


Next, apply (of course, this means apply the four-vector product) the gradient four-vector to the current four-vector just defined:


1
J = i
, i c, J
(98)
c t
The resulting four-vector, equated to zero, produces three equations, which
constitute the generalized charge-current continuity equations. In particular,
the first one is the well known equation for conservation of charge:

+J=0
(99)
t
1 J
+ c = 0
(100)
c t
J=0
(101)
The second equation can also be derived by taking the gradient of Gauss
electric field law, equating with the time derivative of Amperes law, applying the vector equivalence 66 and simplifying the emerging wave equation for
the electric field, which is known to be zero.
Similarly, the last expression is a completely reasonable vector equation
within our theory, since in equations 96 and 97 (or in 58) we had defined
the current (3D) vector as the gradient of the electromagnetic scalar. Remembering a theorem (see Feynman [21]) of differential calculus of vectors
that says that the curl of a gradient is zero, the identity is proved.
Finally, taking the time derivative of 99 and subtracting the divergence
of 100 we find that the charge density satisfies the wave equation 2 = 0:
1 2
+ 2 = 0
(102)
c2 t2
Also, take the time derivative of 100, subtract the gradient of 99, apply
the vector equivalence 66 and simplify with 101. With which we conclude
that the current density satisfies the wave equation 2 J = 0:

1 2J
+ 2 J = 0
c2 t2

(103)

5.5. Some electromagnetism in classical Physics


Classical electromagnetic theory finds inhomogeneous wave equations for
both potentials and electromagnetic fields, whereas we found homogeneous
equations for all physical variables. Jackson, [38] p.246, shows the equation
for the B field:

1 2B
2 B 2 2 = 0 J
(104)
c t
The curl at the right-hand side of this equation should be zero if the
current density, J, constituted a conservative field; since, in such situation, it
would be equal to the gradient of some scalar potential. With this, the curl
of a gradient is automatically zero.

26

Diego Sa
a

Jackson, [38] p.246, also shows the equation for the E field:
1
1 2E
1 J 
=
2
(105)
c2 t2
0
c t
Whose right-hand side is, again, considered as different from zero in
classical Physics, whereas in our theory it is immediately zero after replacing the definitions of charge and current densities in terms of our proposed
electromagnetic scalar.
On the other hand, for the case of the scalar and vector potentials,
Jackson, [38] p.240, shows how to uncouple the equations for these variables,
through the use of the so called Lorenz condition, which arbitrarily equates
to zero the definition of the electromagnetic scalar of our theory, effectively
forcing to zero the charge and current densities, which conform the right-hand
sides of the inhomogeneous equations for potentials.
2 E

5.6. Solution of wave equation


Suppose that a function of space and time u(t, x, y, z) satisfies the partial
differential equation 2 u = 0:
1 2u 2u 2u 2u
+
+ 2 + 2 =0
(106)
c2 t2
x2
y
z
where c is a constant with the dimensions of speed. The classical solution of
this equation is a periodic function. On the other hand, Coulombs law and
the potentials, in the solutions obtained by Lienard and Wiechert, are not
periodic. One might wonder how are we going to reproduce such results. As
long as the present writer knows, Coulombs law has never been derived from
first principles, and even Newtons law of gravitation has the same form and,
therefore, it is very probable, and rather obvious, that its origin is also in the
wave equation.
The answer is that we have to pick the new form as the ansatz of the solution:
a
u(r, t) =
(107)
tkr
This form (for electromagnetic and gravitational potentials), or some
small integer power of it (square for gravitational, electric and magnetic
forces), avoids the infinities for radius close to zero, and is a promising nonharmonic solution also for problems in other areas of Physics. In particular,
the present author proposes this form to describe the law of gravitation.
Direct substitution in the wave equation shows that an arbitrary function
u(r, t) = f ( tkr), such as the suggested above, or any linear combination
of such solutions, satisfies the wave equation, where is angular velocity, k
is a wave vector pointing in any direction, and r is a position vector from an
arbitrary origin. The a is an appropriate constant of charge, charge density,
amplitude of electric field, etc., depending on the problem being solved.
After replacing the proposed solution in the wave equation, the dispersion
relation is obtained:
= c k
(108)

Algebra for Vectors and an Application

27

5.7. Covariance of physical laws


In section 4.2 it was shown the form of the differential of interval. Now we
have to question whether such form is preserved when a relativistic boost is
applied.
Let us assume the usual Lorentz transformations, which define a boost in the
x direction:
q
2
1 vc2 ,
)/
(109)
dt0 = (dt vdx
2
c
q
2
(110)
dx0 = (dx vdt)/ 1 vc2 ,
dy 0 = dy,
0

dz = dz.

(111)
(112)

Then, let us replace these values into the space-time four-vector

dS' = (i\,c\,dt',\; dx',\; dy',\; dz')    (113)

and let us compute its square via the standard four-vector product, dS'dS'. The reader can verify that this four-vector product preserves the square of the interval:

dS'\,dS' = (-c^2\,dt^2 + dx^2 + dy^2 + dz^2,\; 0,\; 0,\; 0)    (114)
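A minimal numerical sketch of that verification, restricted to the scalar part of dS'dS' (the squared interval) and using arbitrary test values chosen only for illustration, is the following:

    import numpy as np

    c, v = 3.0e8, 1.2e8                      # speed of light and boost speed (test values)
    gamma = 1.0 / np.sqrt(1.0 - (v / c)**2)

    # an arbitrary coordinate differential (test values)
    dt, dx, dy, dz = 2.0e-9, 0.4, -0.3, 0.7

    # boosted differentials, equations (109)-(112)
    dt_p = gamma * (dt - v * dx / c**2)
    dx_p = gamma * (dx - v * dt)
    dy_p, dz_p = dy, dz

    # scalar part of the four-vector square, i.e. the squared interval, in both frames
    interval   = -c**2 * dt**2   + dx**2   + dy**2   + dz**2
    interval_p = -c**2 * dt_p**2 + dx_p**2 + dy_p**2 + dz_p**2

    print(np.isclose(interval, interval_p))  # True: the interval is preserved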
This is important for guaranteeing that the four-vector product preserves the covariance of physical laws. As Feynman explains, "The fact that the Maxwell equations are simple in this particular [four-vector] notation is not a miracle, because the notation was invented with them in mind. But the interesting physical thing is that every law of physics [...] must have this same invariance under the same transformation. Then when you are moving at a uniform velocity in a spaceship, all of the laws of nature transform together in such a way that no new phenomenon will show up. It is because the principle of relativity is a fact of nature that in the notation of four-dimensional vectors the equations of the world will look simple." [21]
This is the reason why it is of paramount importance to represent all physical variables with four-vectors and, moreover, why the operations on them should preserve their form. As we have seen, current Physics does not integrate the electromagnetic fields E and B into a four-vector, as was explained before, because it lacks the equivalent of the time component, which is represented by our scalar field, s. In the following section we will point out another formula where current Physics has incorrectly dismissed four-vectors.
5.8. Electromagnetic forces
Engelhardt [36] explains that either the Lorentz force, or the field equations,
or both must be suitably modified to account for the force on a particle in
its rest-frame. It is, of course, well known that the Lorentz force must be
modified anyway to include the effect of radiation damping, when a charge
produces electromagnetic waves due to strong acceleration. Whether a modification of the Lorentz force alone leaves [Maxwell] equations intact, is an

open question. In 1890 Hertz was aware of the fact that the final forms of the
forces are not yet found (emphasis in the original).
In the following, a new formula is suggested for completing the computation
of electromagnetic forces, without affecting the Maxwell equations.
Quantum phenomena such as the Aharonov-Bohm effect have received explanations appealing to the potentials but not to the electromagnetic fields or the Lorentz forces: "in an ideal experiment, the electron sees no B or E fields, though it does traverse different potentials A and V" [37].
In the present paper, both aspects have been improved, with the addition of the scalar electromagnetic field and of the additional force term proposed below, which is a function of the current (the flow of electrons in the A-B effect).
The forces on a test charge are computed in classical Physics by means of the classical Lorentz force equation, by multiplying the charge, q, by an expression reminiscent of the electromagnetic four-vector:

\mathbf{F} = q\,(\mathbf{E} + \mathbf{v}\times\mathbf{B})    (115)
As should be clear, the charge does not constitute a four-vector, and the appearance of the velocity vector v is not very natural.
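For reference, a minimal numerical evaluation of the classical expression (115), with arbitrary illustrative values for the charge, the fields and the velocity, looks as follows:

    import numpy as np

    q = 1.602e-19                       # test charge (illustrative value)
    E = np.array([1.0e3, 0.0, 0.0])     # electric field (illustrative values)
    B = np.array([0.0, 0.0, 1.0e-2])    # magnetic field
    v = np.array([0.0, 2.0e5, 0.0])     # velocity of the test charge

    # classical Lorentz force, equation (115)
    F = q * (E + np.cross(v, B))
    print(F)                            # x, y, z components of the force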
We propose the following formula to compute the forces associated with
currents in an electromagnetic field, as an extrapolation and correction of the
previous one. It is reached by multiplying the inverse of the speed of light by
the current four-vector and by the electromagnetic four-vector (see Jackson
[38], p. 611):
\mathbf{F} = \frac{1}{c}\,J\,M = \frac{1}{c}\,\bigl(i\,c\,\rho,\;\mathbf{J}\bigr)\,\bigl(s,\;i\,\mathbf{E} + c\,\mathbf{B}\bigr)    (116)

By expanding the indicated product we get:

\mathbf{F} = \Bigl(\, i\bigl(\rho\,s + \tfrac{1}{c}\,\mathbf{J}\cdot\mathbf{E}\bigr) + \mathbf{J}\cdot\mathbf{B},\;\; i\bigl(\tfrac{1}{c}\,\mathbf{J}\times\mathbf{E} - c\,\rho\,\mathbf{B}\bigr) + \bigl(\rho\,\mathbf{E} + \tfrac{s}{c}\,\mathbf{J} + \mathbf{J}\times\mathbf{B}\bigr)\,\Bigr)    (117)
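The grouping of terms in (117) can be checked symbolically. The sketch below is only an illustration: it assumes that the four-vector product has the quaternion-like form (p0, p)(q0, q) = (p0 q0 + p.q, q0 p - p0 q + p x q), which is the sign convention consistent with equations (114) and (117) above, and all function and variable names are chosen for this example only.

    import sympy as sp

    def fv_product(p0, p, q0, q):
        # assumed four-vector product: scalar part p0*q0 + p.q,
        # vector part q0*p - p0*q + p x q (convention read off from (114) and (117))
        return p0 * q0 + p.dot(q), q0 * p - p0 * q + p.cross(q)

    c, rho, s = sp.symbols('c rho s', real=True)
    J = sp.Matrix(sp.symbols('J_x J_y J_z', real=True))
    E = sp.Matrix(sp.symbols('E_x E_y E_z', real=True))
    B = sp.Matrix(sp.symbols('B_x B_y B_z', real=True))

    # F = (1/c) J M with J = (i c rho, J) and M = (s, i E + c B), as in (116)
    F_scalar, F_vector = fv_product(sp.I * c * rho, J, s, sp.I * E + c * B)
    F_scalar = sp.expand(F_scalar / c)
    F_vector = (F_vector / c).applyfunc(sp.expand)

    # the scalar part collects i*(rho*s + J.E/c) + J.B, written out in components
    print(F_scalar)
    # the vector part collects i*(J x E/c - c*rho*B) + (rho*E + s*J/c + J x B)
    print(F_vector)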

Whittaker [39] proposed a scalar force. Jackson [38], on page 611, except for the minor appearance of the speed of light in the last term, shows our second scalar term, that is (1/c)\,\mathbf{J}\cdot\mathbf{E}, together with the two classical vector terms, which appear at the end of (117).
Prykarpatsky and Bogolubov [40] and Martins and Pinheiro [41] have obtained part of our first real vector term, (s\,\mathbf{J}/c), since our definition of the electromagnetic scalar in terms of potentials includes the divergence of the vector potential (refer to expression (79)), which those authors multiply by the charge and velocity. This product is simply recognized as the current, J.

6. Discussion
Four-vectors, in the form proposed by the present author, emerge in this paper as a possibly more appropriate mathematical tool for the handling of vectors and for the study of fundamental physical variables and the equations that describe them.
This new mathematical structure seems to be a formalization of the classical vectors. Its simplicity contributes to the possibility of more extended and
fruitful uses in all branches of science.
As an illustration of such applications, four-vectors have allowed, in this paper, the identification of a new component of the electromagnetic field, namely the electromagnetic scalar. Unlike in [30, eqs. (57)-(60)], our electromagnetic scalar does not need to be artificially appended to Maxwell's equations, but constitutes an intrinsic part of their derivation and structure.
All the classical physical variables mentioned in the present paper, such as the charge and current densities, the scalar and vector potentials, and the electric, magnetic and new electromagnetic scalar fields, have been proved here to satisfy the homogeneous wave equation, which gives a strong argument for concluding that our universe has a single, wave-like constitution.
In this paper, the author proposed several wave equations, some of which are new or different from those of standard Physics. This is important, but also quite risky for the present theory, since current Physics is quite well tested. However, a new scientific theory, with the aim of being worthy and successful, should really understand and describe how nature works. Moreover, it should be falsifiable, and make testable predictions of experimental outcomes not yet put to the test.
The reader should be left with the question of whether the theory that has been exposed represents reality or not. To help answer this, numerous differences with respect to the currently accepted models have been proposed in this paper, which should make it easy for physicists to locate the discrepancies and reject the theory if points are found where it is inconsistent with experiment. As with every theory, it is up to the practitioners, in this case the physicists and mathematicians, to locate the failures or problems, if any.
The reproduction of many known equations, some of them rather complex, such as all of Maxwell's equations, from simple four-vector products provides very strong reassurance in favor of the proposed vector algebra as one of the most correct and powerful mathematical tools to apply in Physics.
The periodic solutions of the wave equation are well known, but the
non-periodic solutions are not known, or have been ignored, despite their
importance to Physics. The classical Coulomb and Newtonian equations for
the electrostatic and gravitational forces and potentials seem to be traceable

to the wave equation, which is rather natural after acceptance of equation (64) for the conversion between mass and charge. The present author has postulated the non-periodic solutions of the d'Alembert equation as the correct expressions for these phenomena.
The reader should have noticed that the gradients of several four-vectors, such as the electromagnetic and the current four-vectors, as well as all the wave equations, were equated to zero in order to generate Maxwell's equations and others. Why should it be so? The present author does not have the answer. The question should be posed to Nature and is left to the reader to discover. This problem has the signature of the situations mentioned in Subsection 2.2, according to which some complex four-vectors may have zero magnitude despite the fact that they are non-zero. Feynman [21], in chapter 25, concludes that "All of the laws of physics can be contained in one equation. That equation is U = 0" [his emphasis]. Our statement is stricter, in the sense that the relations satisfied by the physical variables are not arbitrary but all of them are just standard linear wave equations (the d'Alembert equation).
The proposal of changing the existing definitions of the dielectric permittivity and magnetic permeability of vacuum may appear too radical. However, with the proviso that the theory proposed in this paper is correct, and even without it, the new definitions allow one to simplify Physics while simultaneously preserving its coherence.
A proposal was given to dismiss the classical electromagnetic units called coulomb and ampere. Therefore, the mechanical units kilogram, meter and second are the only ones that remain required to study Physics in general and electromagnetism in particular.

Acknowledgment
The author wishes to express his gratitude to Dr. Delbert Larson for performing a comprehensive and exhaustive revision of the paper and for providing multiple suggestions for improvement. Also, thanks to Dr. César Costa of Escuela Politécnica Nacional, who provided some important observations. Of course, any remaining errors are the full responsibility of the present author.
Finally, my deep thanks to Dr. Bertfried Fauser, who kindly provided the endorsement for a slightly different version of the present paper so that it could be published on arXiv, even though its publication was later suppressed by anonymous managers of that Cornell web site without any explanation.

References
[1] L. Silberstein. Quaternionic form of relativity. Phil. Mag., 23:790-809, 1912.
[2] Martin Erik Horn. Quaternions in university-level physics considering special relativity. ArXiv preprint physics/0308017, 2003.
[3] S. de Leo. Quaternions and special relativity. Journal of Mathematical Physics, 37:2955-68, June 1996.
[4] J. C. Baez. The Octonions. ArXiv Mathematics e-prints, May 2001.
[5] R. Mukundan. Quaternions: From classical mechanics to computer graphics, and beyond. In Proceedings of the 7th Asian Technology Conference in Mathematics, pages 97-105. Citeseer, 2002.
[6] Martin Greiter and Dirk Schuricht. Imaginary in all directions: an elegant formulation of special relativity and classical electrodynamics. European J. Phys., 24(4):397-401, 2003.
[7] James Clerk Maxwell. A treatise on electricity and magnetism. Dover Publications Inc., New York, 1954. 3rd ed., two volumes bound as one.
[8] A. Gsponer and J. P. Hurni. Quaternions in mathematical physics (2): Analytical bibliography. ArXiv preprint math-ph/0511092 v3, 2005.
[9] G. Martínez Sierra, B. Poirier, and P. Francois. Una epistemología histórica del producto vectorial: Del cuaternión al análisis vectorial. Latin-American Journal of Physics Education, 2(2):16, 2008.
[10] J. W. Gibbs and E. B. Wilson. Vector analysis: a text-book for the use of students of mathematics and physics. Yale Bicentennial Publications. Dover Publications, 1931.
[11] M. J. Crowe. A History of Vector Analysis: The Evolution of the Idea of a Vectorial System. Dover Books on Mathematics Series. Dover Pub., 1967.
[12] O. Heaviside. Electrical papers. 2 volumes. Macmillan and Co., London, 1892.
[13] C. C. Silva and R. de Andrade Martins. Polar and axial vectors versus quaternions. American Journal of Physics, 70:958-63, September 2002.
[14] D. Hestenes. Space-time algebra. Documents on Modern Physics. Gordon and Breach, 1966.
[15] D. Hestenes and G. Sobczyk. Clifford Algebra to Geometric Calculus: A Unified Language for Mathematics and Physics. Fundamental Theories of Physics. D. Reidel, 1987.
[16] J. Lasenby, A. N. Lasenby, and C. J. L. Doran. A unified mathematical language for physics and engineering in the 21st century. Philosophical Transactions of the Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences, 358(1765):21-39, 2000.
[17] Chris Doran and Anthony Lasenby. Geometric algebra for physicists. Cambridge University Press, Cambridge, 2003.
[18] D. Hestenes. Oersted medal lecture 2002: Reforming the mathematical language of physics. American Journal of Physics, 71:104, 2003.
[19] G. Casanova. L'algèbre de Clifford et ses applications. Advances in Applied Clifford Algebras, 12(1), 2002.
[20] D. Saá. Fourvector algebra. ArXiv e-prints, November 2007.

[21] Richard P. Feynman. Feynman lectures on physics. Addison Wesley Longman, September 1970.
[22] E. B. Dam, M. Koch, and M. Lillholm. Quaternions, interpolation and animation. Datalogisk Institut, Københavns Universitet, 1998.
[23] E. Salamin. Application of quaternions to computation with rotations. Unpublished Internal Report, Stanford University, Stanford, CA, 1979.
[24] A. Gsponer and J. P. Hurni. The physical heritage of Sir W. R. Hamilton. ArXiv preprint math-ph/0201058, 2002.
[25] P. G. Tait. Encyclopædia Britannica, volume 20. Encyclopædia Britannica, 1886.
[26] B. L. Stevens and F. L. Lewis. Aircraft Control and Simulation. John Wiley, 2003.
[27] G. Aragón, J. L. Aragón, and M. A. Rodríguez. Clifford algebras and geometric algebra. Advances in Applied Clifford Algebras, 7(2):91-102, 1997.
[28] J. D. Jackson and L. B. Okun. Historical roots of gauge invariance. Reviews of Modern Physics, 73(3):663, 2001.
[29] J. D. Jackson. From Lorenz to Coulomb and other explicit gauge transformations. American Journal of Physics, 70:917, 2002.
[30] K. J. van Vlaenderen and A. Waser. Generalisation of classical electrodynamics to admit a scalar field and longitudinal waves. Hadronic Journal, 24(5):609-28, 2001.
[31] V. V. Dvoeglazov. Essay on the non-maxwellian theories of electromagnetism. ArXiv preprint hep-th/9609144, 1996.
[32] D. J. Griffiths. Introduction to electrodynamics, volume 3. Prentice Hall, New Jersey, 1999.
[33] R. Littlejohn. Lecture notes, Appendix A. Physics 221B - Quantum Mechanics - Spring 2011 - University of California, Berkeley. URL: http://bohr.physics.berkeley.edu/classes/221/1011/221.html, 2011.
[34] G. P. Shpenkov. On the fine-structure constant physical meaning. Hadronic Journal, 28(3):337-372, 2005.
[35] V. A. Kuligin, G. A. Kuligina, and M. V. Korneva. Analysis of the Lorenz Gauge. Apeiron, 7:12, 2000.
[36] W. Engelhardt. On the relativistic transformation of electromagnetic fields. Apeiron, 11(2):309-326, April 2004.
[37] Herman Batelaan and Akira Tonomura. The Aharonov-Bohm effects: Variations on a subtle theme. Physics Today, 62, 2009.
[38] John David Jackson. Classical electrodynamics. Wiley, New York, NY, 3rd ed. edition, 1999.
[39] E. T. Whittaker. A History of the Theories of Aether and Electricity from the Age of Descartes to the Close of the Nineteenth Century. Longmans, Green and Co., 1910.
[40] Anatoliy K. Prykarpatsky and Nikolai N. Bogolubov. The Maxwell electromagnetic equations and the Lorentz type force derivation - the Feynman approach legacy. Int. J. Theor. Phys., 51(1):237-245, 2012.

[41] Alexandre A. Martins and Mario J. Pinheiro. On the electromagnetic origin of inertia and inertial mass. International Journal of Theoretical Physics, 47(10):2706-2715, 2008.
Diego Saá¹
¹ Emeritus. Departamento de Ciencias de Información y Computación, Escuela Politécnica Nacional, Ladrón de Guevara E11-253, Quito, Ecuador. Tel. (593-2) 2567-849.
e-mail: diego.saa@epn.edu.ec
