
Everyone is now talking about the recent results from CERN regarding the FTL speeds recorded (after taking all possible errors into consideration) for neutrinos, one of the fundamental particles of the Standard Model. Neutrinos have always been a source of trouble for physicists! The Standard Model was designed on the basis that neutrinos have no mass, and up until the 90's this assumption was very much acceptable. Requiring massless neutrinos was also a motivation from cosmology: if the neutrinos exceed a certain, small threshold for their mass (around 50 eV, that is, around 9 × 10^-32 g), then, given the sheer number of neutrinos in the Universe, it would collapse on itself rather than expand as is observed experimentally. But this problem can still be solved if all three neutrinos (electron-, muon- and tau-) have a combined mass of less than 0.3 eV, and one experiment in 2010 [1] did find experimental verification for that (around 0.28 eV).

The current issue, which has spread like wildfire in the scientific media, is more serious than the mass problem, if it turns out to be true. Even though the speed of light being an upper limit in the universe is accepted as a postulate of relativity, there is also a logic behind this assumption. To understand this logic, we need to consider the very simple theory of electromagnetic fields. From Maxwell's equations in vacuum (the only place where the maximum speed can possibly be reached):

$$\nabla \cdot \mathbf{E} = 0, \qquad \nabla \cdot \mathbf{B} = 0, \qquad \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad \nabla \times \mathbf{B} = \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}$$
we see from the last equation that time variations in the electric field E lead to the generation of a magnetic field B, and vice-versa from the third equation. This implies that, once started, electromagnetic fields are a self-propagating phenomenon (in vacuum, at least): they will continue to power themselves in vacuum, as long as there are no obstacles in the way which dissipate the energy in the fields. It is this self-propagating nature that makes the speed of light, c, the ultimate speed in the universe. Equivalently, the constancy of the speed of light (in any inertial frame) can be derived from the assumption that the photon is massless.

To get a "feel" for how c is the upper limit, consider a thought experiment. Say you have a spaceship that has all the necessary machinery to accelerate instantaneously to any desired speed. Note that your spaceship is not a self-propagating system like light, as you need to give constant energy to the ship to make it move. This need arises, of course, because in order to create an acceleration on a massive body, you always need a certain finite force (from Newton's second law). Now let there be a source of light next to you at the "starting point", from where you are about to race with the light beam. We now consider the situation just after the race has started. In order to be faster than the light beam, you need to move at a rate greater than light; that is, you need to cover more distance in a given instant of time than the light does. Then say that at a time instant t0 you accelerate your ship to move faster than the rate of change of the electric field of the light just next to you. But while you powered your engine, transmitting energy through all the levers and hydraulics (electronics, mechanics, whatever the mode), remember that you did all this through massive objects. Each of these objects was in turn in an initial state of inertia before the "order" to change it came from you. Due to inertia alone, there will always be a little delay before the system comes up to the required rate.

But check what has happened during all this delay with the electric field: as soon as the E was generated at a point, it led to the creation of B at a corresponding point in a plane perpendicular to it. This B in turn transfers energy to generate the next E, and that too at the rate of c. Energy was oscillating simultaneously in E and B (due to light being massless, and hence having no inertia; I will come to the point of the quantum effect later), and the next point for creating E happened at the rate of c, while the spaceship had to struggle through its own internal massive parts just to transfer the energy in the first place. Even if, by some magic, the rate of c is achieved in the next step (which is analogous to the creation of the next E at the rate of c in the light beam), the ship will always incur a considerable delay in the first step, which keeps its total speed less than c. This way, no matter how hard you may try, the speed of your ship remains effectively less than c. A similar mechanism applies to any massive body in this universe: due to the existence of mass, inertia will always resist a change in state (even if it is only a change in speed from v1 to v2), and this delay will always keep the speed less than the speed of a massless particle, that is, c. The point to understand is that relativity does not just give a special treatment to light. What it says is that only massless particles can travel at the fastest rate possible in this universe.
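The same conclusion can be put compactly using the relativistic energy-momentum relation (a standard result, added here only as a supporting aside):

$$E^2 = p^2c^2 + m^2c^4, \qquad v = \frac{p\,c^2}{E} = \frac{c}{\sqrt{1 + m^2c^2/p^2}},$$

so v = c is reached exactly when m = 0, while any non-zero mass keeps v strictly below c, however much energy the engines supply.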
The number for that rate (whether 3 × 10^5 km/s or 186,000 miles per second) is determined by experiments and by the conventions used for measurements.

Now for the consideration of the quantum effect. The important point here is that the "mass" we attribute to light, when it acts as quanta of energy (photons), appears when it interacts with some form of matter. In vacuum, such effects are absent (even quantum fluctuations are not that prominent, because the time for which these fluctuations exist, down near the Planck scale, is far too short to make any significant change), and the classical description works perfectly.

References

[1] S. A. Thomas, F. B. Abdalla and O. Lahav, "Upper Bound of 0.28 eV on Neutrino Masses from the Largest Photometric Redshift Survey", Phys. Rev. Lett. 105, 031301 (2010).

We consider here an elementary analysis of General Relativity. We will be using the same contravariant-covariant notation as used in past notes on Lagrangian Mechanics. All discussion assumes a unit system where the speed of light is 1 (c = 1). We start by considering the Lorentz-invariant space-time interval dτ in the Minkowskian (flat) space-time:

$$d\tau^2 = \eta_{\mu\nu}\, dx^\mu dx^\nu, \qquad \eta_{\mu\nu} = \mathrm{diag}(+1,-1,-1,-1)$$
where repeated indices imply summation, and dx^μ is the infinitesimal displacement along the x^μ direction. The fundamentals of GR are related to the study of this interval, and especially to the change in the metric (in this case, η_μν) under different co-ordinate systems. The geometrical interpretation of GR states: the presence of mass-energy terms (expressed by T_μν, the energy-momentum tensor) affects the metric (generically written as g_μν instead of η_μν), which distorts the space-time manifold in such a way that the proper time interval dτ is unchanged. Along with this, we need the Equivalence Principle, which states that for every point in space-time, there is a locally inertial co-ordinate system, such that the effect of gravity is not observed in that co-ordinate system (or, in other words, the particle under consideration is in free fall in this system). This is one of the variations of the equivalence of inertial and gravitational mass, and is more helpful in the mathematical analysis. Hence, in this locally inertial co-ordinate system ξ^α, we have the equations of motion of a free particle:

$$\frac{d^2\xi^\alpha}{d\tau^2} = 0$$
which is Newton's second law in the absence of an external force. Of course, if there were any other force besides gravity, then we could add it on the RHS. Then we can write the general expression, in a locally inertial frame about the space-time point under consideration, as:

$$\frac{d^2\xi^\alpha}{d\tau^2} = f^\alpha$$
here, it should be understood that f^α can be any force (per unit mass) except the gravitational one. This is also Newton's law in Special Relativity. However, under a general co-ordinate transformation:

$$\xi^\alpha \rightarrow x^\mu, \qquad \xi^\alpha = \xi^\alpha(x^0, x^1, x^2, x^3)$$
we get, taking the ξ's as functions of the x's:

$$\frac{d^2 x^\lambda}{d\tau^2} + \Gamma^\lambda_{\mu\nu}\,\frac{dx^\mu}{d\tau}\frac{dx^\nu}{d\tau} = \frac{\partial x^\lambda}{\partial \xi^\alpha}\, f^\alpha$$
note that here the free index on every term is the same (in this case, λ). Γ^λ_μν is the affine connection, defined as:

$$\Gamma^\lambda_{\mu\nu} = \frac{\partial x^\lambda}{\partial \xi^\alpha}\,\frac{\partial^2 \xi^\alpha}{\partial x^\mu \partial x^\nu}$$
Also, under this transformation, we see a change in the metric for the non-inertial frame, and hence the invariant proper time dτ can be written as:

$$d\tau^2 = g_{\mu\nu}\, dx^\mu dx^\nu$$
where

$$g_{\mu\nu} = \frac{\partial \xi^\alpha}{\partial x^\mu}\,\frac{\partial \xi^\beta}{\partial x^\nu}\,\eta_{\alpha\beta}$$
is the metric in the non-inertial frame. Hence, as we see, the changes in going from a Minkowskian, inertial frame (that is, free of gravity) to a non-inertial frame (that is, affected by gravity) are the change in the metric tensor and the introduction of the affine connection. The affine connection is not a tensor, as will be clear once we study the transformation rules for vectors and tensors. The affine connection is related to the gravitational metric g_μν as:

$$\Gamma^\sigma_{\lambda\mu} = \frac{1}{2}\, g^{\nu\sigma}\left(\frac{\partial g_{\mu\nu}}{\partial x^\lambda} + \frac{\partial g_{\lambda\nu}}{\partial x^\mu} - \frac{\partial g_{\mu\lambda}}{\partial x^\nu}\right)$$
We now find the metric for some familiar co-ordinate systems. For the plane (2D geometry), the distance between any two closely spaced points (x, y) and (x+dx, y+dy) can be written as ds^2 = dx^2 + dy^2. Comparing this with the general expression, we can write the metric associated with the 2D plane in matrix form as:

$$g = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$
with g_00 = 1, g_01 = 0, g_10 = 0 and g_11 = 1. But if we consider the same flat plane in polar co-ordinates, then from elementary vector calculus we know that the distance between two points (r, θ) and (r+dr, θ+dθ) is ds^2 = dr^2 + r^2 dθ^2, and so the associated metric is:

$$g = \begin{pmatrix} 1 & 0 \\ 0 & r^2 \end{pmatrix}$$
and hence g_rr = 1, g_rθ = g_θr = 0 and g_θθ = r^2. There is an overall equivalence under a minus sign; hence, in both the examples considered above, either g or -g can act as a metric for the system. Now, even a 2D object (say, a circle) can have a varying metric g and still be flat, as we saw in this case (and hence be free from gravity). Thus, a varying metric is not a sufficient criterion for modelling space-time in the presence of a gravitational field. In order to know whether a space-time region is flat or not, and hence determine the true presence of a gravitational field, we need the Riemann curvature, which is a (1,3)-tensor, R^λ_μνκ, defined as:

$$R^{\lambda}_{\ \mu\nu\kappa} = \frac{\partial \Gamma^\lambda_{\kappa\mu}}{\partial x^\nu} - \frac{\partial \Gamma^\lambda_{\nu\mu}}{\partial x^\kappa} + \Gamma^\lambda_{\nu\eta}\,\Gamma^\eta_{\kappa\mu} - \Gamma^\lambda_{\kappa\eta}\,\Gamma^\eta_{\nu\mu}$$
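As a quick check of the two examples above, the following sketch (assuming Python with sympy is available; the helper names are ours) computes the affine connection and the Riemann tensor for the polar-co-ordinate metric, and confirms that every curvature component vanishes even though the metric varies with position:

import sympy as sp

r, th = sp.symbols('r theta', positive=True)
coords = [r, th]
n = len(coords)

# Metric of the flat plane in polar co-ordinates: ds^2 = dr^2 + r^2 dtheta^2
g = sp.Matrix([[1, 0], [0, r**2]])
g_inv = g.inv()

# Affine connection: Gamma^a_{bc} = 1/2 g^{ad} (d_b g_{dc} + d_c g_{db} - d_d g_{bc})
Gamma = [[[sum(sp.Rational(1, 2) * g_inv[a, d] *
               (sp.diff(g[d, c], coords[b]) + sp.diff(g[d, b], coords[c])
                - sp.diff(g[b, c], coords[d]))
               for d in range(n))
           for c in range(n)] for b in range(n)] for a in range(n)]

# Riemann tensor, same convention as above:
# R^a_{bcd} = d_c Gamma^a_{db} - d_d Gamma^a_{cb} + Gamma^a_{ce} Gamma^e_{db} - Gamma^a_{de} Gamma^e_{cb}
def riemann(a, b, c, d):
    term = sp.diff(Gamma[a][d][b], coords[c]) - sp.diff(Gamma[a][c][b], coords[d])
    term += sum(Gamma[a][c][e] * Gamma[e][d][b] - Gamma[a][d][e] * Gamma[e][c][b]
                for e in range(n))
    return sp.simplify(term)

# Non-trivial connection components: Gamma^r_{theta theta} = -r, Gamma^theta_{r theta} = 1/r
print(sp.simplify(Gamma[0][1][1]), sp.simplify(Gamma[1][0][1]))
# All Riemann components vanish: the space is flat despite the position-dependent metric.
print(all(riemann(a, b, c, d) == 0
          for a in range(n) for b in range(n) for c in range(n) for d in range(n)))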
If the Riemann tensor is zero, then the space-time under consideration is flat; otherwise, a non-zero curvature signals the presence of a genuine gravitational field. We now define the relation between a tensor in different co-ordinate systems. Under a general co-ordinate transformation:

$$x^\mu \rightarrow x'^{\mu}$$
A contravariant vector V^μ transforms as:

$$V'^{\mu} = \frac{\partial x'^{\mu}}{\partial x^\nu}\, V^\nu$$
a contravariant vector is a special type of tensor, known as a (1,0)-tensor. Similarly, from the definition of the metric g_μν, we see that it is a (0,2)-tensor, with transformation rule:

$$g'_{\mu\nu} = \frac{\partial x^\rho}{\partial x'^{\mu}}\,\frac{\partial x^\sigma}{\partial x'^{\nu}}\, g_{\rho\sigma}$$
These transformation definitions arise from purely geometrical considerations of differentials on a manifold, a discussion that is beyond the scope of such a brief note. But for a physical analysis, we can accept these definitions as a rule, without getting too much into the technical details. A generic (p,q)-tensor then has the transformation relation:

$$T'^{\mu_1\cdots\mu_p}{}_{\nu_1\cdots\nu_q} = \frac{\partial x'^{\mu_1}}{\partial x^{\rho_1}}\cdots\frac{\partial x'^{\mu_p}}{\partial x^{\rho_p}}\;\frac{\partial x^{\sigma_1}}{\partial x'^{\nu_1}}\cdots\frac{\partial x^{\sigma_q}}{\partial x'^{\nu_q}}\; T^{\rho_1\cdots\rho_p}{}_{\sigma_1\cdots\sigma_q}$$
And now we can check the previous claim that the affine connection is not a tensor. Under a generic co-ordinate transformation:

$$x^\mu \rightarrow x'^{\mu}$$
the affine connection, given as

$$\Gamma^\lambda_{\mu\nu} = \frac{\partial x^\lambda}{\partial \xi^\alpha}\,\frac{\partial^2 \xi^\alpha}{\partial x^\mu \partial x^\nu}$$
would transform as

$$\Gamma'^{\lambda}_{\mu\nu} = \frac{\partial x'^{\lambda}}{\partial x^\rho}\,\frac{\partial x^\sigma}{\partial x'^{\mu}}\,\frac{\partial x^\kappa}{\partial x'^{\nu}}\;\Gamma^\rho_{\sigma\kappa} + \frac{\partial x'^{\lambda}}{\partial x^\rho}\,\frac{\partial^2 x^\rho}{\partial x'^{\mu}\,\partial x'^{\nu}}$$
which is not in agreement with the general co-ordinate transformation for a tensor, due to the presence of the second term. It is because of this property that we define the covariant derivative of a generic tensor, to make it comply with the general co-ordinate transformation rules. The covariant derivative of a contravariant vector V^μ is defined as:

$$\nabla_\nu V^\mu = \frac{\partial V^\mu}{\partial x^\nu} + \Gamma^\mu_{\nu\lambda}\, V^\lambda$$
Similarly, for a covariant vector U_μ, the covariant derivative is:

$$\nabla_\nu U_\mu = \frac{\partial U_\mu}{\partial x^\nu} - \Gamma^\lambda_{\nu\mu}\, U_\lambda$$
Thus, for a general (p,q)-tensor, we have the covariant derivative defined as:

$$\nabla_\rho\, T^{\mu_1\cdots\mu_p}{}_{\nu_1\cdots\nu_q} = \frac{\partial T^{\mu_1\cdots\mu_p}{}_{\nu_1\cdots\nu_q}}{\partial x^\rho} + \Gamma^{\mu_1}_{\rho\lambda}\, T^{\lambda\mu_2\cdots\mu_p}{}_{\nu_1\cdots\nu_q} + \cdots - \Gamma^{\lambda}_{\rho\nu_1}\, T^{\mu_1\cdots\mu_p}{}_{\lambda\nu_2\cdots\nu_q} - \cdots$$
We can now see an analogy between General Relativity and the gauge theory of electromagnetism:

1) In GR, the gravitational field is represented by the (0,2)-tensor g_μν(x) or the (2,0)-tensor g^μν(x), while in electromagnetism the field is represented by the (0,1)-tensor A_μ(x).

2) The affine connection plays the role of defining the covariant derivative in GR, while A_μ(x) defines the covariant derivative in electromagnetism.

3) The Riemann tensor R^λ_μνκ defines the curvature, and hence the "field strength", of a gravitational field, while the field strength F_μν(x) = ∂_μ A_ν - ∂_ν A_μ defines the analogous curvature or field strength in electromagnetism.

One final point for this note: g_μν represents a spin-2 field, and hence Einstein's Field Equations (EFEs) represent the classical equations of motion for a spin-2 particle. In the next note, we will discuss possible ways to derive Einstein's Field Equations and consider a few elementary examples of different metrics and relativistic effects. Please let us know if you would be interested in this future exposition of the subject.
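The analogy in points 2) and 3) can be made explicit by comparing the commutators of the two covariant derivatives (the signs depend on the conventions chosen; the form below is one standard choice):

$$[D_\mu, D_\nu]\,\psi = ie\,F_{\mu\nu}\,\psi \quad \text{with } D_\mu = \partial_\mu + ieA_\mu, \qquad\qquad [\nabla_\nu, \nabla_\kappa]\,V^\lambda = R^{\lambda}_{\ \mu\nu\kappa}\,V^\mu,$$

so in both theories the "field strength" measures the failure of the covariant derivatives to commute.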

Neutrino Observation: How the Hell Do They Work?


Neutrinos are one of the fundamental particles which make up the universe. They are also one of the least understood. Neutrinos are similar to the more familiar electron, with one crucial difference: neutrinos do not carry electric charge. Because neutrinos are electrically neutral, they are not affected by the electromagnetic forces which act on electrons. Neutrinos are affected only by the "weak" sub-atomic force, of much shorter range than electromagnetism, and are therefore able to pass through great distances in matter without being affected by it. If neutrinos have mass, they also interact gravitationally with other massive particles, but gravity is by far the weakest of the four known forces.

Neutrinos can travel through the Earth without interacting with a single atom, and thus most of them leave no trace. To increase the likelihood of observing the extremely rare interaction of a neutrino with matter, physicists build massive detectors and record the detectable particles that emerge from the rare collisions of neutrinos with atoms inside the detector. Trying to understand this elusive particle is thus, expectedly, one of the most interesting and active fields of research in particle physics today. A whole host of "neutrino detectors" have been, or are being, commissioned to help us peel back the mysteries of the universe. This leads us to wonder: how do scientists detect neutrinos when they are so weakly interacting?

How Do Neutrino Detectors Work?

The Three Major Families of Detectors


There are essentially three types of detectors, classified according to the energy or origin of the neutrinos they are designed to detect:

Detectors for solar neutrinos:

Solar neutrinos have an energy between 0 and 20 MeV, depending on the type of solar nuclear reaction they come from. Underground, undersea or under the ice, the detectors built for them detect either the Cherenkov light emitted when a neutrino interacts with the water (as in Kamiokande or Super-Kamiokande), or the transformation of atoms under neutrino interaction, the resulting atom being radioactive: Argon-37 produced from Chlorine-37 in the Homestake experiment, or Germanium-71 produced from Gallium-71, as in the GALLEX experiment.

(The Super-Kamiokande Experiment Water Tank)

Detectors near nuclear plants:

The anti-neutrinos coming out of nuclear reactors are emitted in great quantity and have a mean energy of about 4 MeV. The detector uses the inverse beta decay reaction (anti-neutrino + proton --> neutron + anti-electron) to detect anti-neutrinos: it records the photons emitted when the neutron is absorbed by matter, and when the anti-electron coming from the neutrino interaction annihilates with an electron of matter. This detection type was used by the Reines and Cowan experiment for the first detection of the neutrino in 1956, by BUGEY, by CHOOZ, etc.

(The CHOOZ Nuclear Reactor)

Detectors with neutrino beam:

Nowadays, neutrinos generated by accelerators have energies ranging from some 10 MeV to some 100 GeV. The detectors in this case identify the particles coming out of the high-energy neutrino interaction with a proton, a neutron or an electron of the detector material. The neutrino beams are produced using a proton beam from an accelerator, sent against a beryllium target and then filtered through a great amount of dense matter (lead, concrete, iron, earth). This detection type was used by the Brookhaven experiment which discovered the muon neutrino (nu_mu) in 1962, by the CHARM II experiment, and by the NOMAD and CHORUS experiments in 1995, etc.

(The MicroBooNE Experiment)

The Neutrino Theory


Neutrinos are omnipresent in nature, such that in just one second, tens of billions of them "pass through every square centimetre of our bodies without us ever noticing." Despite this, they are extremely "difficult to detect" and may originate from events in the universe such as "colliding black holes, gamma ray bursts from exploding stars, and violent events at the cores of distant galaxies". There are three types of neutrinos, or what scientists term "flavors": electron (ν_e), muon (ν_μ) and tau (ν_τ), which are named after the type of charged particle that arises after neutrino collisions; as neutrinos propagate through space, they "oscillate between the three available flavors." Neutrinos only have a "smidgen of weight" according to the laws of physics, perhaps less than a "millionth as much as an electron."
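For reference, the textbook two-flavour approximation for the oscillation probability, quoted here only to make the statement about flavour oscillation concrete, reads:

$$P(\nu_\alpha \rightarrow \nu_\beta) \simeq \sin^2(2\theta)\,\sin^2\!\left(1.27\,\frac{\Delta m^2\,[\mathrm{eV}^2]\; L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right),$$

where θ is the mixing angle, Δm² the difference of the squared masses, L the distance travelled and E the neutrino energy.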

(The three different neutrino flavors: electron, muon and tau)

Detection Mechanisms
The detection mechanisms applied by the various detectors are as follows:

Scintillators:

Two scintillation detectors (detectors that register particles through the flashes of light they produce in a scintillating material) are placed next to the cadmium-doped water targets. Antineutrinos with an energy above the threshold of 1.8 MeV then cause charged-current "inverse beta-decay" interactions with the protons in the water, producing positrons and neutrons. The resulting positron annihilation with electrons creates pairs of coincident photons with an energy of about 0.5 MeV each, which could be detected by the two scintillation detectors above and below the target. The neutrons were captured by cadmium nuclei, resulting in delayed gamma rays of about 8 MeV that were detected a few microseconds after the photons from the positron annihilation event. This experiment was designed by Cowan and Reines to give a unique signature for antineutrinos, and so to prove the existence of these particles. Only about 3% of the antineutrinos from a nuclear reactor carry enough energy for the reaction to occur.
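The 1.8 MeV threshold quoted above follows directly from the particle masses involved; a quick check, using the standard values m_n ≈ 939.57 MeV, m_p ≈ 938.27 MeV and m_e ≈ 0.51 MeV:

$$E_{\bar\nu}^{\mathrm{min}} \simeq \frac{(m_n + m_e)^2 - m_p^2}{2\,m_p} \approx 1.8\ \mathrm{MeV}.$$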

The KamLAND Neutrino Detector (Scintillator)

Radio-chemical detection:

Chlorine detectors, based on the method suggested by Bruno Pontecorvo, consist of a tank filled with a chlorine-containing fluid such as tetrachloroethylene. A neutrino converts a chlorine-37 atom into one of argon-37 via the charged-current interaction. The fluid is periodically purged with helium gas, which removes the argon. The helium is then cooled to separate out the argon, and the argon atoms are counted via their electron-capture radioactive decays. A chlorine detector in the former Homestake Mine near Lead, South Dakota, containing 520 short tons (470 metric tons) of fluid, was the first to detect solar neutrinos, and made the first measurement of the deficit of electron neutrinos from the Sun.
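Written out explicitly, the capture reaction exploited by these detectors (with its well-known energy threshold) is:

$$\nu_e + {}^{37}\mathrm{Cl} \rightarrow {}^{37}\mathrm{Ar} + e^-, \qquad E_\nu \gtrsim 0.814\ \mathrm{MeV},$$

so only the higher-energy part of the solar neutrino spectrum (and not the abundant low-energy pp neutrinos) is visible to a chlorine detector.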

Cherenkov Detectors:

"Ring-imaging" detectors take advantage of the Cherenkov light produced by charged particles moving through a medium faster than the speed of light in that medium. In these detectors, a large volume of clear material (e.g., water or ice) is surrounded by light-sensitive photomultiplier tubes. A charged lepton produced with sufficient energy typically travels faster than the speed of light in the detector medium (though slower than the speed of light in vacuum). This generates an "optical shockwave", known as Cherenkov radiation, which can be detected by the photomultiplier tubes. The result is a characteristic ring-like pattern of activity on the array of photomultiplier tubes. This pattern can be used to infer direction, energy, and (sometimes) flavor information about the incident neutrino.
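As a rough illustration of the "faster than light in the medium" condition (a standard back-of-the-envelope estimate, assuming water with refractive index n ≈ 1.33):

$$\beta > \frac{1}{n} \;\Rightarrow\; \gamma_{\mathrm{th}} = \frac{1}{\sqrt{1-1/n^2}} \approx 1.52, \qquad E_{\mathrm{th}} = \gamma_{\mathrm{th}}\, m c^2 \approx 160\ \mathrm{MeV}\ \text{for a muon},$$
$$\cos\theta_C = \frac{1}{n\beta} \;\Rightarrow\; \theta_C \approx 41^\circ \ \text{for } \beta \approx 1,$$

which sets both the energy threshold for ring imaging and the opening angle of the ring recorded by the photomultiplier tubes.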

Radio Detectors:

The Radio Ice Cherenkov Experiment (RICE) uses antennas to detect Cherenkov radiation from high-energy neutrinos in Antarctica. The Antarctic Impulsive Transient Antenna (ANITA) is a balloon-borne device flying over Antarctica, detecting Askaryan radiation produced by ultra-high-energy neutrinos interacting with the ice below. The Askaryan effect is a close relative of the Cherenkov effect: it is the phenomenon whereby a particle traveling faster than the phase velocity of light in a dense dielectric (such as salt, ice or the lunar regolith) produces a shower of secondary charged particles which carries a charge anisotropy, and thus emits a cone of coherent radiation in the radio or microwave part of the electromagnetic spectrum. It is named after Gurgen Askaryan, a Soviet-Armenian physicist who postulated it in 1962. The effect was first observed experimentally in 2000, 38 years after its theoretical prediction. So far the effect has been observed in silica sand, rock salt and ice.

Background Noise
Most neutrino experiments must address the flux of cosmic rays that bombard the Earth's surface. The higher-energy (>50 MeV or so) neutrino experiments often cover or surround the primary detector with a "veto" detector, which reveals when a cosmic ray passes into the primary detector, allowing the corresponding activity in the primary detector to be ignored ("vetoed"). For lower-energy experiments, the cosmic rays are not directly the problem; instead, the spallation neutrons and radioisotopes produced by the cosmic rays may mimic the desired physics signals. For these experiments, the usual solution is to locate the detector deep underground, so that the earth above can reduce the cosmic-ray rate to tolerable levels.

This 15-foot Bubble Chamber, from the Fermi National Accelerator Laboratory in Batavia, Ill., is a neutrino detector relic of the 1970s, the early days of high-energy particle physics.

IRON OXIDE: AN UNUSUAL METAL IN EARTH'S CORE!


The crushing pressures and intense temperatures in Earth's deep interior squeeze atoms and electrons so closely together that they interact very differently; with depth, materials change. New experiments and supercomputer computations have shown that iron oxide undergoes a new kind of transition under deep-Earth conditions. Iron oxide, FeO, is a component of the second most abundant mineral in Earth's lower mantle, ferropericlase. The finding, published in an upcoming issue of Physical Review Letters, could alter our understanding of deep-Earth dynamics and the behavior of the protective magnetic field, which shields our planet from harmful cosmic rays.

Ferropericlase contains both magnesium and iron oxide. To imitate these extreme conditions in the lab, the team studied the electrical conductivity of iron oxide at pressures and temperatures up to 1.4 million times atmospheric pressure and 4000 °F, on par with conditions at the core-mantle boundary. They also used a new computational method that uses only fundamental physics to model the complex many-body interactions among electrons. The theory and the experiments both predict a new kind of metallization in FeO. But, as the team outlined in Physical Review Letters, the metal's structure was surprisingly unchanged. The finding could have implications for our as-yet incomplete understanding of how the Earth's interior gives rise to the planet's magnetic field.

While many transitions are known in materials as they undergo nature's extraordinary pressures and temperatures, such changes in fundamental properties are most often accompanied by a change in structure. These can be changes in the way that atoms are arranged in a crystal pattern, or even in the arrangement of the subatomic particles that surround atomic nuclei. Compounds typically undergo structural, chemical, electronic, and other changes under these extremes. Contrary to previous thought, the iron oxide went from an insulating (non-electrically-conducting) state to a highly conducting metal at 690,000 atmospheres and 3000 °F, but without a change to its structure. Previous studies had assumed that metallization in FeO was associated with a change in its crystal structure. This result means that iron oxide can be both an insulator and a metal, depending on temperature and pressure conditions.
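For readers more used to SI units, the quoted conditions translate roughly as follows (simple conversions, taking 1 atm ≈ 101.3 kPa):

$$1.4\times10^{6}\ \mathrm{atm} \approx 140\ \mathrm{GPa},\quad 4000\,^{\circ}\mathrm{F} \approx 2480\ \mathrm{K}; \qquad 6.9\times10^{5}\ \mathrm{atm} \approx 70\ \mathrm{GPa},\quad 3000\,^{\circ}\mathrm{F} \approx 1920\ \mathrm{K}.$$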

Pseudogap state in superconductors

Pratap Raychaudhuri

(Fig. 1: Phase diagram of superconducting NbN films as a function of disorder. At high disorder, Cooper pairs do not disappear at the superconducting transition, Tc, but continue to exist up to a much higher temperature, T*, called the pseudogap temperature.)

The ability of certain materials to conduct electricity without any resistance was discovered a hundred years ago, when Kamerlingh Onnes found that the resistance of solid mercury dropped to zero below 4.2 K, its so-called transition temperature (Tc). Studies of this spectacular phenomenon, coined "superconductivity", are now a century old, but they continue to thrive, both from the point of view of newer applications and from that of throwing up challenges on the fundamental physics governing the collective behavior of electrons in solids. All superconductors discovered in the first seven decades after Kamerlingh Onnes's discovery had low transition temperatures (the highest being Nb3Ge with Tc ~ 23 K), well below the liquefaction temperature of nitrogen (77 K). The physics behind these conventional superconductors has been well understood over the past 50-odd years, based on Bardeen-Cooper-Schrieffer (BCS) theory.
In these materials, an indirect attractive force mediated by vibrations of the crystalline lattice (phonons) binds pairs of electrons with opposing spin into what are called Cooper pairs. Once formed, the Cooper pairs condense into a phase-coherent state that can carry current without any resistance. The binding energy of the Cooper pairs is observed as a gap in the density of states at the Fermi energy, called the superconducting energy gap. BCS theory successfully predicts the relationship between the transition temperature, the superconducting energy gap and the change in transition temperature when an element is substituted by a different isotope of the same element.

While the 'holy grail' of superconductivity, namely a material that superconducts at or close to room temperature, has not been discovered yet, a major breakthrough happened in 1986 with the discovery of a new class of ceramic materials, all of them containing copper and oxygen. Known as high-temperature superconducting cuprates, several members of this class of materials become superconducting at temperatures well above that of liquid nitrogen (YBa2Cu3O7, Tc ~ 90 K; Bi2Sr2CaCu2O8, Tc ~ 85 K; and HgBa2Ca2Cu3O8 with the highest Tc of 135 K), making them useful for diverse applications such as superconducting magnets and levitated trains. However, as far as fundamental physics is concerned, these materials also pose one of the greatest unsolved mysteries in condensed matter physics today.

High-temperature cuprate superconductors are completely different from their conventional counterparts. The normal state does not follow the "Fermi liquid theory" expected of a normal metal. Unlike in conventional superconductors, the wave-function describing the Cooper pair is highly anisotropic and changes sign when rotated by 90 degrees. But the most intriguing feature is the observation of a "pseudogap" state above the transition temperature, where the zero resistance is destroyed but a gap in the electronic density of states, commonly associated with the existence of Cooper pairs, continues to persist up to a much higher temperature. Arguably the hottest debate in this field relates to the nature of the pseudogap, i.e. whether it is persistent superconductivity or whether it arises from some competing magnetic order. A number of recent experiments on these and other related materials have tried to resolve this issue, but the debate continues.

Now, experiments performed at TIFR on superconducting NbN thin films show that the "pseudogap" state can also appear in conventional superconductors when the superconductor is made "dirty" enough, by introducing disorder in the form of structural defects in the crystalline lattice. Clean NbN is a conventional superconductor whose properties are well described by BCS theory. However, the situation becomes different in the dirty limit, when the electronic mean free path approaches the de Broglie wavelength of the electrons. Experiments such as scanning tunneling spectroscopy, performed at low temperature using a scanning tunneling microscope, reveal that the gap in the electronic density of states extends up to a temperature, T*, which is well above Tc.
The origin of this pseudogap state in NbN can be traced back to phase fluctuations, which are directly observed in the complementary measurement of the magnetic penetration depth. In the pseudogap state between Tc and T* (Fig. 1), the electrons bind into Cooper pairs. However, these Cooper pairs do not condense into a phase-coherent state, due to thermal phase fluctuations between the nanometer-sized domains that spontaneously form in the presence of strong disorder. It is interesting to note that a novel phenomenon once thought to be uniquely associated with the physics of high-temperature superconducting cuprates has ended up enriching our understanding of conventional superconductors. To what extent the same mechanism is also responsible for the pseudogap state in high-temperature superconducting cuprates is currently still open to debate. Further experiments on the role of the underlying magnetic order, and of the disorder that is inevitably present in these doped materials, are needed to conclusively end this debate in the cuprates. However, as newer materials are discovered and the older ones are understood more deeply, some mysteries are cleared up while new ones continue to unfold, throwing up newer challenges for the days to come!
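For completeness, the weak-coupling BCS relations alluded to earlier (textbook results, quoted here without derivation) are:

$$2\Delta(0) \approx 3.53\,k_B T_c, \qquad T_c \propto M^{-\alpha}\ \text{with}\ \alpha \approx \tfrac{1}{2}\ \text{(isotope effect)},$$

where Δ(0) is the zero-temperature superconducting energy gap and M the ionic mass.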

An Accelerating Universe, or Light Slowing Down?

Astronomers have recently announced that the universe appears to be expanding at an accelerating rate. This is inferred because distant supernovae are unexpectedly dim, which is interpreted as implying that the expansion of the universe is faster now than it was before. This acceleration is in turn explained by some mysterious repulsive force that is pushing the universe apart. Another possible explanation is that the speed of light is slowing down, which would avoid the need for such a repulsive force. But in order to understand this explanation, we need to understand more about the red shift.

Cosmologists believe that the red shift of light is caused by the expansion of space as light travels from distant objects. This expansion of space causes the light waves to become farther apart, and thus longer, and red shifted. The more time light has been traveling, the more time there has been for space to expand, and the greater the red shift.
Thus more distant objects have a larger red shift. Originally, it was believed that the red shift was proportional to distance (Hubble's law). However, the accelerating expansion of the universe was inferred because distant supernovae are fainter than expected based on their red shift. This means that they are farther away than one would expect based on the linear increase of red shift with distance. So if we let r(d) be the red shift of an object that is a distance d away from us, then r(d) increases rapidly for objects near us, but then more slowly for objects farther away. Thus these objects are fainter than they would be if the red shift function r(d) were a constant multiple of d.

This is interpreted as evidence that the universe was expanding more slowly in the past, so that the red shift of distant objects is less than one would expect. When the light from distant objects began its journey, the universe, and therefore space, was expanding more slowly. Thus there was a smaller contribution to the red shift than for light that traveled more recently. Nearer objects are less subject to this slowdown, because the universe was expanding faster when light left them. Thus their red shift is comparatively larger, in proportion to their distance from us.

Suppose that a distant object is at distance d = d1 + d2 from us. Its red shift is then r1(d1) + r2(d2), where r1(d1) is the contribution to its red shift during the initial transit of light through a distance of d1, and r2(d2) is the contribution to the red shift from the final transit through a distance of d2. We can assume that r1(d1) = c1 * d1 and r2(d2) = c2 * d2, where c1 < c2 because the universe was expanding more slowly in the past. For an object at distance d2 from us, the red shift will be c2 * d2. Hubble's law would give the red shift for the distant object as c2 * (d1 + d2), which is c2 * d1 + c2 * d2. Since c1 < c2, this is larger than the observed red shift for the distant object. Thus we have that distant objects have a smaller red shift than one would expect based on Hubble's law. In this way, this observation can be explained by an accelerating expansion of the universe.

This effect can also be explained by a slowdown in the speed of light. If light were traveling faster originally, then a slowdown would make distant objects appear fainter. The reason these supernovae would appear fainter is that light was traveling faster when it left them. This would make these objects appear farther away than they really are. It would also mean that the light spent less time in transit, so that there would be less time for space to expand, and thus the red shift would be reduced. Since light was traveling faster through the initial distance d1 than through the final distance d2, the contribution to the red shift would be proportionally larger from d2 than from d1. Both effects would mean that distant objects would tend to be much dimmer, and apparently farther away, than one would expect based on their red shift according to Hubble's law.

This explanation does not assume that the red shift itself is caused by a slowdown in the speed of light, although that is another interesting possibility. We are assuming that when light slows down, its apparent frequency is unchanged. However, other assumptions can also be considered.
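The toy model above is easy to play with numerically. The following sketch (plain Python, with made-up numbers; the function names are ours) simply encodes the two-segment red shift r1(d1) + r2(d2) and compares it with the naive Hubble's-law extrapolation c2 * (d1 + d2):

def observed_redshift(d1, d2, c1, c2):
    """Red shift accumulated in the two segments, r1(d1) + r2(d2)."""
    return c1 * d1 + c2 * d2

def hubble_redshift(d1, d2, c2):
    """Red shift a naive Hubble's-law extrapolation of the nearby rate would give."""
    return c2 * (d1 + d2)

d1, d2 = 1.0, 1.0   # arbitrary distance units
c1, c2 = 0.5, 0.7   # arbitrary redshift-per-distance rates, with c1 < c2

print(observed_redshift(d1, d2, c1, c2))   # 1.2
print(hubble_redshift(d1, d2, c2))         # 1.4 -> larger than the observed value
# i.e. the distant object shows a smaller red shift than Hubble's law predicts,
# so it looks fainter (farther away) than its red shift alone would suggest.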
