A semi-serious critical commentary on what science says about the universe, exposing some of the flaws in the current models. The author concludes that the universe is composed of information, with space and time being essentially forms of information censorship. He backs this up with an example of how nature conspires to prevent us from destroying information. There are several appendices that expand on the ideas presented in the main body of the essay. Written in a somewhat humorous vein, the ideas presented are nonetheless factual, based on the author's understanding of the current state of scientific knowledge. The essay summarizes some key concepts and quotations from Isaac Newton, Albert Einstein, Hermann Minkowski, Arthur Eddington, Niels Bohr, Boris Podolsky, Nathan Rosen, Kurt Gödel, John Bell, John Wheeler, Richard Feynman, Claude Shannon, Alan Turing, Benoit Mandelbrot, Erik Verlinde, Leonard Susskind, and others.

Jan 25, 2013

© Attribution Non-Commercial (BY-NC)


Note to my readers:

You can access and download this essay and my other essays through the Amateur Scientist Essays website under Direct Downloads at the following URL:

You are free to download and share all of my essays without any restrictions, although it would be very nice to credit my work when quoting directly from them.

♫

I'm not a true scientist, but an engineer (retired) by trade.

There's an old story about the engineer and the guillotine. During the French Revolution, a priest, a drunkard, and an engineer were condemned to be executed by the guillotine. The priest was the first to be put to death. When the blade was released, it almost reached the bottom, but somehow it got stuck a few inches above the priest's neck. This was taken as a divine augury of the priest's innocence, so he was set free. The same thing happened with the drunkard, and he was also released. Then it was the engineer's turn. He asked to be placed on the guillotine facing upward without a blindfold. The engineer's unusual request was granted, and as the executioner raised the blade for the third time, the engineer shouted, "Hey, I think I see what the problem is!" Engineers aren't known for our political survival skills, but we sure know how to troubleshoot a technical problem.

I'm quite familiar with “pure” science, having learned higher mathematics, physics, chemistry, and many other scientific subjects while pursuing a couple of engineering degrees. One of my favorite hobbies is poring over as many books and articles on relativity, quantum physics, and cosmology as I can get my hands on, and trying to understand how reality “works.” I call it solving the reality riddle. Despite all the progress that science has made, it got stuck only inches away from solving it – just like the figurative guillotine blade. As an engineer, I think I see what the problem is.

Science used to be inextricably linked to philosophy. Isaac Newton, perhaps the greatest scientist and mathematician of all time, spent most of his time working on astrology, alchemy, and Bible prophecy.[1] I always considered Albert Einstein to be an even greater philosopher than scientist. By all accounts, he was just a mediocre mathematician, which undoubtedly hindered his early scientific career. Settling for a job as a patent clerk, he stole time from his daily chores to tackle relativity. I think Einstein's trouble with mathematics brought out his true genius. His famous thought experiments used images rather than mathematical formulas. Using this technique, his fertile imagination enabled him to see beyond the 19th-century paradigm that inhibited many of the great scientific minds of that era who were far more mathematically competent than Einstein.

Niels Bohr, the father of the Copenhagen interpretation of quantum physics, was another truly great philosopher/scientist/genius. He and Einstein held very different views on the true nature of reality and they had heated arguments over them, yet they remained close friends throughout their lives. J. Robert Oppenheimer was steeped in math and physics, but he also had a keen interest in philosophy and eastern religion.[2] It's too bad there aren't many great philosopher/scientists around today.

Newton showed us that equations describe nature, although nobody knows why nature should follow mathematical rules. James Clerk Maxwell gave us beautiful mathematical equations that revealed a fundamental truth about nature that led Einstein to invent special relativity. Beginning in the late 19th century, physics became increasingly dominated by mathematics. Many of today's physicists seem to be primarily mathematicians with scientific leanings. I think that is the crux of the problem. I have nothing against math personally; I used it all the time in my work. I can appreciate the seductive appeal of its abstract beauty, but scientists shouldn't follow the mathematicians blindly, especially when equations lead in the wrong direction or in too many directions at the same time.

String theory is currently leading science in some of those directions. I admit I understand nothing about string theory beyond what I've read in the popular literature. I've read that string theory is extremely difficult to master, but that it has produced the most beautiful mathematics ever. I don't doubt that for an instant, but shouldn't string theorists have come up with at least one testable theory after 30 years of hard work? I'll return to this topic later on.

[1] He was interested in knowing when the world would end. He worked out the math, and it told him the End would come in 2060. But he was a rational man who hedged his bets, so he said it might also come after 2060.

[2] Oppenheimer could actually read Sanskrit. He said he recalled lines from the Bhagavad Gita while watching the first atomic fireball rise over the desert near Alamogordo, NM. Enrico Fermi was too preoccupied with timing the arrival of the shockwave and calculating the kilotonnage of the blast to think about Hindu scripture. He wasn't much into it anyway.

Peeling away the layers of the problem, we see that modern physics embraces dual theories: general relativity and quantum mechanics. General relativity is a classical theory in the tradition of Newton and Maxwell with the underlying premise that every phenomenon in the universe has a cause that is explained by universal laws expressed in equations. It conforms to the 19th-century belief that the universe is like a giant machine that works in the same predictable manner as a clock or a steam engine. Everything that has happened, is happening, or ever will happen is completely determined by the initial state of the machine. When Albert Einstein proposed relativity, other scientists who were mired in 19th-century orthodoxy thought he was a radical. He wasn't; he pretty much stuck with Newton's classical universe, except that he merged space and time into a space-time continuum. Everything else remained smooth, predictable, and infinitely divisible into smaller parts just like in Newton's world.

I think those underlying premises are completely wrong. Quantum physics agrees with me: everything is jagged, unpredictable, and lumpy. It's almost as if God refuses to do calculus and prefers to count and roll dice instead. Why place money on quantum physics? Because when the classical view opposes the quantum view, the quantum view is always proven right by experiment. A bit more on that later.

Now I don't argue that most predictions based on relativity have proven to be accurate in the everyday world. Time dilation and E = mc^2, predicted by special relativity, are proven facts. GPS navigation is only possible by allowing for time dilation and the fact that gravity slows clocks as predicted by general relativity. Other phenomena such as gravitational lensing of light and the precession of Mercury's orbit closely match Einstein's own predictions. Cosmologists use general relativity to model the birth and evolution of the universe; however, I have this nagging suspicion that those models aren't quite right. Infinities and violations of causation are some of the problems I see with them.
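
The GPS corrections are easy to estimate for yourself. Here is a back-of-the-envelope sketch (using textbook constants and a nominal GPS orbit, not an official GPS calculation) of the daily clock offset from the two effects just mentioned:

```python
import math

# Physical constants (SI units)
GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
c = 2.99792458e8      # speed of light, m/s

R_earth = 6.371e6     # mean Earth radius, m
r_sat = 2.656e7       # GPS orbital radius, m (~20,200 km altitude)

# Orbital speed for a circular orbit: v = sqrt(GM/r)
v = math.sqrt(GM / r_sat)

# Special relativity: the moving satellite clock runs SLOW by v^2/(2c^2)
velocity_rate = -v**2 / (2 * c**2)

# General relativity: a clock higher in the gravity well runs FAST
gravity_rate = (GM / c**2) * (1.0 / R_earth - 1.0 / r_sat)

seconds_per_day = 86400
net_us_per_day = (velocity_rate + gravity_rate) * seconds_per_day * 1e6

print(f"velocity effect: {velocity_rate * seconds_per_day * 1e6:+.1f} us/day")
print(f"gravity effect:  {gravity_rate * seconds_per_day * 1e6:+.1f} us/day")
print(f"net offset:      {net_us_per_day:+.1f} us/day")
```

The net result comes out to roughly +38 microseconds per day, which is the correction actually built into the GPS clocks; left uncorrected, position errors would pile up at several kilometers per day.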

Other cracks are beginning to show in the relativistic edifice as well. Take, for example, the fact that galaxies seem to rotate significantly faster than general relativity predicts. Cosmologists have filled in that crack with dark matter. The problem is that dark matter can't be explained in the current version of the standard model based on quantum field theory. Also, even after adding dark matter to our galaxy, there is another gravitational anomaly much closer to home: the so-called Pioneer anomaly.

NASA launched the Pioneer 10 and 11 interplanetary space probes in 1972 and 1973 to explore the outer planets of the solar system. By now, these probes are in deep space well beyond the orbit of Pluto, still moving away from the sun, and still sending radio signals back to earth. Analysis of these signals shows that the rates of deceleration from the sun for both spacecraft exceed the deceleration calculated from the current theory of gravity by 10^-9 m/s^2. The Ulysses and Galileo spacecraft show similar anomalies. Can astrophysicists fix this anomaly by adding a halo of dark matter around the sun, or is this another indication that the current theories of gravity need to be revisited and maybe even revised?
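
To get a feel for how tiny that anomaly is and yet how it adds up, here is a rough estimate (using the rounded 10^-9 m/s^2 figure quoted above and an assumed 30 years of flight time) of the accumulated discrepancy:

```python
a_anom = 1e-9                # anomalous deceleration, m/s^2 (rounded figure)
years = 30
t = years * 365.25 * 86400   # flight time in seconds

# Constant acceleration kinematics: how far off track does the probe drift?
dv = a_anom * t              # accumulated velocity shortfall, m/s
dx = 0.5 * a_anom * t**2     # accumulated position shortfall, m

print(f"after {years} years: dv = {dv:.2f} m/s, dx = {dx / 1000:.0f} thousand km" if False else
      f"after {years} years: dv = {dv:.2f} m/s, dx = {dx / 1e3:.0f} km")
```

The shortfall works out to about one meter per second and a few hundred thousand kilometers: utterly negligible next to the billions of kilometers traveled, yet easily visible in precision Doppler tracking data, which is why the anomaly could not simply be ignored.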

Then there is the problem with the rate of expansion of the universe. Solutions to the field equations of general relativity produce dynamic universes that expand or contract. Expanding universes will either stop expanding then start to contract or they will expand forever; the rates of expansion decrease in either case. Astronomers have found that the rate of expansion is increasing lately, which contradicts theory. To remove this contradiction without abandoning general relativity, cosmologists invented dark energy, a kind of anti-gravity force that pushes space-time apart. The problem with dark energy is that it confronts quantum physicists with a new form of energy requiring the addition of a new force carrier particle to an already overcrowded table of fundamental particles. Along with dark matter, it means they will have to revamp quantum field theory. I'm not saying they won't pull that off, but it seems to indicate that something is amiss.

Finally, there is the problem of time travel. If relativity teaches us anything, it's that the principle of causation is inviolate. In fact, it's a main premise underlying special relativity; otherwise the equations are meaningless. Unfortunately, some of the solutions of the general relativity field equations allow backward time travel. The avant-garde mathematician Kurt Gödel created an entire universe based on general relativity that violates causality. Various hypothetical time machines have been constructed on paper by imagining extremely long and massive cylinders that rotate rapidly in 4-dimensional Einsteinian space-time. Now I'm fairly open-minded when it comes to time travel (it provides wonderfully entertaining plot lines); however, the engineer within me sees a problem when a theory makes predictions that violate one of its own fundamental principles. I think I can see what the problem is: when space and time are combined into a continuum, gravity can twist it to such an extent that space and time are interchanged. This makes for great sci-fi movies, but I'm afraid it creates some serious paradoxes. I suspect backward time travel also violates the second law of thermodynamics (see Appendix D). Maybe liberating time from its space-time prison would avoid these paradoxes.

Now let's turn back to the conflict between quantum mechanics and relativity. Over the decades, Albert Einstein and Niels Bohr engaged in a friendly sparring match about the true nature of reality. Einstein never denied that quantum effects are real, but he remained true to his classical roots and never accepted Bohr's belief in indeterminacy. Einstein remained ever faithful to his clockwork universe, and assumed that events have hidden causes even when they appear random. He thought a universe that violates causation is absurd, and he kept challenging Bohr with clever and inventive thought experiments that attempted to prove the falsehood of the Copenhagen interpretation.

Einstein, along with two colleagues at the Institute for Advanced Study in Princeton, Boris Podolsky and Nathan Rosen, presented one of his most clever thought experiments in a paper published in 1935. That paper came to be known as the EPR paradox, after the authors' initials. The paper said that quantum mechanics is "incomplete" because without hidden variables it could not satisfactorily explain (at least in the minds of the authors) certain changes that occur simultaneously in two remote systems that are quantum-mechanically entangled. Einstein, Podolsky, and Rosen didn't explain how the hidden variables operate, but they insisted they are the only way to avoid violating causation.

Following the publication of the EPR paradox, there was a collective shoulder shrug by quantum physicists. To borrow a phrase from modern software engineers, quantum physics "just works," so why bother explaining why? They just continued with their research as if nothing had happened. Bohr sort of defended himself by publishing a response to the EPR paper, offering what some might call a hand-waving argument using the Heisenberg uncertainty principle. At any rate, nobody could envision an experiment that would prove whether Einstein or Bohr was right. Until 1964, that is.

John Bell, a brilliant Irish mathematician/philosopher/physicist, took a fresh look at the EPR paradox and came up with an elegant, profound, yet simple type of experiment that would show whether the idea of hidden variables holds water. If local hidden variables do in any sense exist, then results from those experiments would obey statistical inequalities, called Bell's inequalities. If Bell's inequalities are violated, then the idea of hidden variables is false. His proof was published in 1964, but there was no technology available to carry out the kinds of experiments proposed in his paper. Science had to wait for technology to catch up, and Alain Aspect and others were finally able to carry them out in the 1980s. Their results did violate Bell's inequalities, thus proving EPR wrong and validating Bohr's Copenhagen interpretation.[3] In fact, the evidence was so overwhelming that if Einstein were still alive in the 1980s, he surely would have conceded that Bohr was right all along.

The point I'm trying to make with this long-winded historical detour is this: if scientists want to merge quantum physics with general relativity, they should build a new theory on quantum principles, and not the other way around. Quantum mechanics really does describe how the universe works, even if it seems counterintuitive or it violates classical principles. Quantum physicists should give up trying to duplicate Einstein's field equations from quantum mechanics. By that I mean they should stop using a 4-dimensional space-time continuum as the starting point.

[3] This topic is covered in much more detail in Appendix A.


As an electrical engineer, I totally get why Einstein introduced the 4-dimensional space-time continuum, a.k.a. Minkowski space, in special relativity. It was a handy mathematical device that provided a convenient way to work out the "currency exchange rate" between units of space and units of time. Assigning real numbers to three dimensions of space and imaginary numbers to time, and combining all of these numbers into a single entity called space-time makes the mathematics clean, elegant, and beautiful. It also makes the important formula E = mc^2 emerge naturally, which is kind of cool. So it's not hard for me to understand why Einstein fell in love with Minkowski space and why others were seduced by it also. However, it's a mistake to conflate an elegant mathematical technique that works for a special case with reality itself, and then apply that technique to everything.

Engineers use similar mathematical devices we know aren't real; we use them simply because they work. Charles Steinmetz was the first person to represent voltages and electrical currents as complex numbers. No sane engineer actually believes that voltages and currents have real and imaginary parts – it's just math. We use Steinmetz's methods because they're easy to work with; the imaginary parts of the complex numbers take care of the phase angles of sine functions in a natural way that just rolls right through the calculations. Doing analysis of AC circuits would be extremely difficult, if not impossible, without using complex numbers. But whenever there's a case that can't be solved that way, I forget the whole idea of complex voltages and currents in a heartbeat and use a different method instead.
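
Steinmetz's trick takes only a few lines to demonstrate. In the sketch below (the source voltage and component values are invented for illustration), a 60 Hz source drives a series resistor-inductor load; representing everything as complex numbers makes the phase angle fall out of ordinary arithmetic:

```python
import cmath
import math

f = 60.0          # line frequency, Hz
V = 120 + 0j      # source voltage phasor, volts (taken as the phase reference)

R = 8.0           # resistance, ohms
L = 0.02          # inductance, henries

# Impedance of a series R-L branch: Z = R + j*omega*L
omega = 2 * math.pi * f
Z = complex(R, omega * L)

# Ohm's law with complex numbers; the "imaginary" part is pure bookkeeping
I = V / Z

mag, phase = cmath.polar(I)
print(f"|I| = {mag:.2f} A at {math.degrees(phase):.1f} degrees")
```

The current comes out around 10.9 A at about -43 degrees: the negative angle says the current lags the voltage, which is exactly what an inductive load does physically. The complex arithmetic tracked the phase shift automatically, with no trigonometry in sight.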

The question Einstein et al. posed in the title of their EPR paper was, "Can [a] quantum-mechanical description of physical reality be considered complete?" It would also be fair to ask the question, "Can a relativistic description of reality using Minkowski space be considered complete?"

Quantum mechanics also starts out with that framework. In fact, the famous Feynman diagrams, used to explain particle interactions, show particles zooming along "world lines" in an abbreviated 2-dimensional version of Einstein's space-time continuum. Why do quantum physicists keep using Einstein's model of space with time woven into it, filled with fields that are smooth, continuous, and infinitely divisible into smaller parts? It's very clear that reality is lumpy and discontinuous.

You are probably asking how an engineer with only an elementary grasp of general relativity has the audacity to question its validity, when every renowned physicist for the past 100 years has said it is simply beyond question. Richard Feynman, who was no slouch when it came to creating theories, once said, "It doesn't matter how beautiful your theory is, it doesn't matter how smart you are. If it doesn't agree with experiment, it's wrong." I'm not saying relativity is wrong exactly; it just isn't complete. It's not obvious to most physicists that general relativity is incomplete because it works for them most of the time. But Newton's theory of gravity also works in very many cases, and yet everyone agrees it's incomplete. Okay, so when doesn't relativity work? Well, I cited a couple of examples earlier where gravity is stronger over large distances than it ought to be and where the expansion of the universe is a bit quirky. Sure, you can plug dark matter and dark energy into relativity and make it work, but doing that just seems too ad hoc for my taste.

String theory was an attempt by physicists to start over with a clean sheet of paper, realizing there are irreconcilable differences between general relativity and quantum mechanics. Furthermore, although quantum field theory and the standard model make successful predictions, they have too many arbitrary constants that can't be explained from first principles. What was needed was a whole new set of first principles where both quantum mechanics and gravity would emerge together from the theory. This was a very sensible approach and I applaud it. Unfortunately, I think string theory made some faulty assumptions from the get-go by repeating some of Einstein's mistakes about the nature of time and space. Instead of just three spatial dimensions represented by real numbers with time pointing in the imaginary direction, string theory uses nine spatial dimensions with time pointing in the imaginary direction. Why a total of 10 dimensions? Well, just because it makes the math work. Okay, I get the math part, but why do we keep insisting that space and time are smooth and continuous instead of jagged and lumpy, and why must we always merge space and time into a continuum? If time and space are really separate things, maybe it's wrong to keep modeling them as a continuum. Furthermore, unless I'm way off target, string theory has hidden variables, and the experiments based on Bell's inequalities proved beyond a doubt that there are no local hidden variables in the real universe.

Another problem is that string theory is infinitely more complicated than the existing theories. That would be okay if it produced new results that aren't found in existing theories. Unfortunately, it hasn't. Something can be factually true and beautiful without being useful; engineers call such a thing "a solution in search of a problem." String theory may be 100% mathematically consistent and full of beauty and elegance, but I'm afraid it's leading us down a rabbit hole. Or many rabbit holes to be exact. It seems there are at least 10^500 different string theories, each one describing a different reality.[4]

With general relativity being fundamentally flawed because it's deterministic, with quantum field theory having too many arbitrary constants that can't be explained from first principles, and with string theorists seemingly chasing tangents, where does science go for answers? Maybe we should put all the complicated math aside for a while and just look at nature. I mean really look at it. The answer to the reality riddle may be hiding in plain sight – everywhere.

The late Benoit Mandelbrot was the father of Mandelbrot sets, commonly known as fractals. Using very simple mathematical formulas that feed output variables back in as new inputs over and over again, some amazingly complex, beautiful, and bizarre structures emerge seemingly out of nowhere. The functions that generate fractals are not complicated: a basic one is z_new = z_old^2 + c, where z is a complex variable and c is a complex constant. When the real and imaginary parts of the z-values in the set are plotted as points on an x-y plane, they form 2-dimensional fractal patterns with structure. Solid 3-D fractal shapes, called Mandelboxes and Mandelbulbs, can also be generated from simple functions that employ feedback.[5] However, you don't need complex numbers to generate fractals.
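
That feedback loop is simple enough to sketch in a few lines. The escape-time test below decides whether a point c belongs to the Mandelbrot set by iterating z_new = z_old^2 + c and watching whether z runs away (the iteration cap and plot window are arbitrary choices):

```python
def escape_time(c, max_iter=50):
    """Iterate z -> z*z + c starting from z = 0. Return the step at which
    |z| exceeds 2, or max_iter if the orbit stays bounded (c likely in the set)."""
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2:        # once |z| > 2 the orbit provably escapes to infinity
            return n
        z = z * z + c
    return max_iter

# Crude ASCII rendering: '#' marks points whose orbits never escaped
for im in [y / 10.0 for y in range(10, -11, -1)]:
    row = ""
    for re in [x / 20.0 for x in range(-40, 11)]:
        row += "#" if escape_time(complex(re, im)) == 50 else " "
    print(row)
```

Even this crude text plot shows the famous cardioid-and-bulb outline; the infinitely filigreed boundary only appears as you zoom in with finer grids and more iterations.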

Fractal shapes are much more than quirky blobs. Fractal objects seem to be ubiquitous in nature, such as breaking ocean waves, veins in a leaf, heads of broccoli, and galaxy disks. In fact, many of the 3-D “special effects” seen in modern animated movies employ computer-generated fractals that look exactly like landscapes, fires, oceans, and other natural objects. Are the similarities between fractal structures and things seen everywhere in nature mere coincidences, or is there some deeper truth behind them?

John Wheeler took Bohr's interpretation of quantum theory to a whole new level, stating that nothing in the universe even exists until it is observed: we live in a universe created by consensus among observers.[6] I'm not sure I'd go as far as Wheeler, but it's obvious that there isn't much matter to be found in the material universe. Most of what we humans consider "solid" – including atoms – consists almost entirely of empty space. What little "matter" can be found in all that empty space seems to consist of quantum states, which are bits of information. Everything that fills the universe, all the fundamental particles and force carriers, the atoms, molecules, and so forth up the chain, are made out of information. But there's more: empty space isn't really empty. A vacuum is filled with "virtual" particles that constantly enter and exit reality. We know those "virtual" particles are present because they exert pressures on metal plates suspended in a vacuum that can be measured experimentally. These "virtual" particles also must have quantum states, so even empty space is filled with information. Alternatively, we could switch that around and say the "virtual" particles carry quantum information about empty space itself. In other words, we live in a dataverse.[7]

[4] That's unimaginably many theories; it's equal to a googol raised to the 5th power, in case you're interested. In fact, that number seems pretty much like infinity to me, and here's how I finally got my head around it: Suppose computer engineers design a supercomputer that can examine a trillion string theories each second to find the right one. If the computer runs 24/7 for a trillion years, the computer could examine around 3.15×10^31 string theories. Now that's also a pretty big number. How big? Imagine owning a swimming pool 15 meters wide by 30 meters long by 2 meters deep – smaller than Olympic-size, but still pretty nice. The number of string theories the computer examined would be more than the number of water molecules in your swimming pool. Now, if you take the original number of string theories and subtract that very large number of string theories examined by the supercomputer over a trillion years, you still end up with almost 10^500 theories left to examine. The computer didn't even make a tiny dent in the total. Infinity minus something huge equals infinity. That's why I'm not putting my money on anyone finding the right string theory.

[5] The Internet is replete with beautiful computer-generated animations of Mandelbrot solids. You can Google them using the keywords "mandelbulb video" or you can download some free Mandelbulb 3D graphics software on your computer and create your own animations.

[6] Human beings get most of our information through our eyes. Dogs get most of their information through their noses, which makes me wonder what a doggie universe created by canine consensus is like.

Data processing and computations are performed by combining bits of information according to certain rules to produce new bits of information. Alan Turing came up with a concept of a universal computer in 1936 that did nothing more than read bits of information from a strip of paper moving back and forth, replacing the bits on the strip with new bits by following a set of logical rules. A Turing machine could perform any task that a modern digital computer is capable of. Quantum interactions do essentially the same thing: interacting particles exchange quantum information about themselves, reading bits and writing new ones. If quantum interactions are equivalent to data processing on a microscopic level, perhaps these processes also generate Mandelbrot sets, either by accident or by design, applying the basic “and”, “or” and “not” logical operations on the quantum states themselves. Maybe our universe is just the totality of countless Mandelbrot-set-producing processes. In fact, maybe our entire universe actually is a Mandelbrot set.
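
Turing's read-replace-move cycle fits in a dozen lines of code. Below is a toy machine of my own devising (the rule table is illustrative, not Turing's original) that scans a binary string left to right, inverts every bit, and halts at the first blank cell:

```python
def run_turing(tape, rules, state="scan", blank="_"):
    """Simulate a one-tape Turing machine. `rules` maps (state, symbol) to
    (symbol_to_write, head_move, next_state); execution stops in state 'halt'."""
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if 0 <= head < len(tape) else blank
        write, move, state = rules[(state, symbol)]
        if 0 <= head < len(tape):
            tape[head] = write
        else:
            tape.append(write)          # grow the tape to the right as needed
        head += 1 if move == "R" else -1
    return "".join(tape).strip(blank)

# Rule table: flip 0 <-> 1 while moving right; halt on the first blank
NOT_RULES = {
    ("scan", "0"): ("1", "R", "scan"),
    ("scan", "1"): ("0", "R", "scan"),
    ("scan", "_"): ("_", "R", "halt"),
}

print(run_turing("1011", NOT_RULES))  # -> 0100
```

The machine knows nothing but "read a bit, write a bit, move, change state," yet rule tables like this one can in principle be composed into anything a modern computer does, which is exactly the point of the analogy with quantum interactions.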

If all of this sounds crazy, consider some of the properties of Mandelbrot sets. Fractals have a property called self-similarity, meaning that the pattern of the whole is repeated by similar patterns everywhere, over and over at ever-smaller scales. Most physicists are coming around to the idea that the universe is essentially non-local, with everything interconnected, and the self-similarity property also results in interconnections among all parts of the set. Mandelbrot sets also have characteristically jagged and discontinuous features extending down to the smallest imaginable scales, echoing the lumpiness found in the quantum world and throughout the universe at large.

Finally, fractal geometry has dimensionality with peculiar properties. The Hausdorff dimensions of common geometric objects are integers: 0 for a point, 1 for a line, 2 for a triangle, 3 for a sphere. The Hausdorff dimension of a Euclidean space equals the number of orthogonal directions in that space. On the other hand, fractals have non-integer dimensions that are smaller than the dimensions of the Euclidean spaces they fit into. For example, a Sierpinski triangle is a fractal that fits into a 2-dimensional plane, and its Hausdorff dimension is approximately 1.58. Fractal solids, like a Mandelbulb, have dimensions between 2 and 3. We know that over very large distances gravity seems to deviate from both Newton's inverse square law and solutions to general relativity's field equations. Physicists use dark matter to explain away that discrepancy, but suppose space itself doesn't have fixed Euclidean-type dimensions. Instead, suppose space itself is a Mandelbrot set with Hausdorff dimensions that approach 3 at small scales and decrease over very large scales. If we picture gravity as a classic gravitational field[8], the field should be more confined in a lower-dimension space than it would be in three dimensions. Motion and inertia would also deviate from classical laws in such a space. Could that explain those discrepancies astronomers are seeing, as well as the Pioneer anomaly? Variable dimensionality would even affect the way the universe at very great distances is perceived through telescopes.
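
For strictly self-similar fractals, the Hausdorff dimension coincides with the similarity dimension, which takes one line to compute: a shape made of N copies of itself, each shrunk by a factor s, has dimension log(N)/log(s). The figures quoted above check out:

```python
import math

def similarity_dimension(copies, scale_factor):
    """Dimension of a self-similar fractal built from `copies` pieces,
    each shrunk linearly by `scale_factor`: d = log(N) / log(s)."""
    return math.log(copies) / math.log(scale_factor)

# Sierpinski triangle: 3 half-size copies -> dimension ~1.585
print(similarity_dimension(3, 2))
# Koch curve: 4 copies at one-third scale -> dimension ~1.262
print(similarity_dimension(4, 3))
# Sanity check on an ordinary solid: a cube is 8 half-size cubes -> dimension 3
print(similarity_dimension(8, 2))
```

The cube case shows the formula agrees with ordinary geometry; the fractal cases land strictly between the integer dimensions of the spaces they live in, which is the property the dark-matter speculation above leans on.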

This essay has been very much a work in progress. The first version was six pages long, plus a couple of appendices. The main essay abruptly ended on Page Six because I simply ran out of clever things to say. Since then, I've published several more versions, adding several more appendices on some related topics, and expanding the entire essay to about 27 pages. After thinking about the reality riddle some more, it occurred to me that I needed to fix the abrupt ending and wrap things up with a few more concluding remarks. I hope these remarks, which follow here, will finalize what I needed to say.

Looking over the landscape of different “realities” that science has postulated, it seems to me that all of them are really just models that map data that Nature presents to us through our senses and instruments

7 Appendix B discusses Information, Entropy, and Meaning from the standpoint of modern information theory. It needs to be pointed out that according to thermodynamics, entropy always increases. Structure, meaning, and intelligence won't emerge spontaneously on their own unless some organizing Process or Principle is causing them to emerge.

8 This is meant only as a conceptual visualization. I don't know whether or not gravity has any actual fields.


into alternative spaces. Engineers use remapping techniques all the time. For example, when analyzing the normal and shear stresses in I-beams and other structures, civil engineers map those stresses into a 2-dimensional space known as Mohr's circle, named after its inventor, Christian Otto Mohr. The data are mapped into the fictional space represented by Mohr's circle, and the analysis is done there. The analytical results – data – are then mapped back into the space of the physical structure.
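To make the mapping concrete, here is a minimal sketch of the Mohr's-circle calculation with hypothetical stress values (the function name and numbers are mine):

```python
import math

# Map a plane stress state (sigma_x, sigma_y, tau_xy) onto Mohr's circle.
# The circle's center and radius give the two principal stresses directly.
def principal_stresses(sigma_x, sigma_y, tau_xy):
    center = (sigma_x + sigma_y) / 2.0
    radius = math.hypot((sigma_x - sigma_y) / 2.0, tau_xy)
    return center + radius, center - radius

# Hypothetical stress state, in MPa
s1, s2 = principal_stresses(80.0, 20.0, 40.0)
print(s1, s2)   # 100.0 0.0
```

The analysis happens entirely in the circle's fictional space (center and radius); the answers are then read back out as physical stresses.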

Electrical engineers use similar techniques. Ideally, 3-phase electric power networks should be balanced. When circuits are unbalanced, which is often the case, they are very difficult to analyze. Charles Fortescue devised a way, called symmetrical components and later popularized through Edith Clarke's textbooks, to decompose an unbalanced, physical 3-phase circuit into three idealized, perfectly-balanced 3-phase circuits, where the analysis can be carried out much more easily. The analytical results are then mapped back into the unbalanced physical circuit to obtain useful values. The extra steps of mapping and remapping the data are well worth the effort, because they greatly reduce the overall amount of computational work.
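The symmetrical-component mapping itself is just complex arithmetic. A minimal sketch (variable names mine) that sanity-checks the transform on a perfectly balanced set of phasors, which should contain only a positive-sequence component:

```python
import cmath

# The 120-degree rotation operator used in symmetrical components
a = cmath.exp(2j * cmath.pi / 3)

def symmetrical_components(Va, Vb, Vc):
    # Decompose three phase voltages into zero-, positive-, and
    # negative-sequence components
    V0 = (Va + Vb + Vc) / 3
    V1 = (Va + a * Vb + a * a * Vc) / 3
    V2 = (Va + a * a * Vb + a * Vc) / 3
    return V0, V1, V2

# A perfectly balanced set: equal magnitudes, phases 120 degrees apart
Va = 1 + 0j
Vb = a * a * Va    # -120 degrees
Vc = a * Va        # +120 degrees
V0, V1, V2 = symmetrical_components(Va, Vb, Vc)
# V0 and V2 vanish; V1 equals Va: only the positive sequence survives
```

For an unbalanced set, all three sequence components are generally nonzero, and each balanced sequence network can then be analyzed on its own.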

Science is confronted with time and simultaneity issues because of the constraints imposed by causation and the finite speed of light. In response, Albert Einstein proposed a method to map data into an alternative 4-dimensional space where he could work around those constraints and make calculations. This is known as the theory of relativity. Confronted with the problem of uncertainty, science invented quantum physics, which has taken on various forms: the wave function, invented by Erwin Schrödinger, and quantum field theory, developed by Richard Feynman and others. All of these techniques have survived into the 21^{st} century because they have been largely successful – within prescribed limits. However, there is a consensus among physicists that none of the existing theories is complete and none is able to describe reality in total. That's why there are so many ongoing efforts to solve the reality riddle by finding the one theory, the one model, the one technique that can do it all.

Scientists have become very inventive lately. In his book The End of Time, Julian Barbour explores an alternative space known as Platonia, where all possible configurations of the universe are mapped as single points. Platonia has countless dimensions, and each point in this space represents an entire universe. The one feature that makes Platonia unique is that time does not exist in it. Barbour says our entire reality – our entire past, present, and future – maps to a set of single points in Platonia. In fact, he says every possible past, present, and future maps to points there as well. The engineer in me sees nothing wrong with this, as long as we view it as just an alternative way of arranging the data that are presented to us in our reality. But is Platonia reality, or just an alternative way of thinking about it?

The same thing could be said of string theories or any of the other current models of reality. As long as these theories can consistently map data from “our world” into an alternative space and back again, all of them might be equally valid. ^{9} But do any of them actually solve the reality riddle? And if some data are fundamentally unknowable to us because of uncertainty, can the riddle be solved at all?

Some of the current candidate theories may be plausible but they cannot be proved or disproved by experiment or observation. Included in this group are Platonia, the Many Worlds Theory, the Eternal Inflation Theory, and the Anthropic Principle. Any of them might explain why things are the way they are, but none of them yields to the scientific method. Maybe science alone can't provide an answer.

I've included some appendices at the end of this essay about some topics I find interesting. I hope you'll find them interesting also.

9 Data mapping seems to be the common feature among all theories. In the final analysis, I think we'll discover that our “reality” consists of only data. The dataverse concept – Wheeler's “it from bit” – has received harsh criticism from certain parties who don't consider information as “real” because it lacks material substance. However, Schrödinger's wave function, ψ, also lacks material substance, and yet quantum physicists consider ψ to be very real.


Appendix A – The EPR Paradox and Bell's Inequality

Bell's inequality is an amazingly powerful, elegant, and yet simple theorem. It was published by John Bell in 1964 as a belated response to the Einstein-Podolsky-Rosen (EPR) paper published in 1935. According to the EPR argument, quantum mechanics must be deterministic underneath, even if we're unable to ascertain the mechanisms that make it so, at least for the time being. Hidden variables are cause-and-effect mechanisms we don't see that keep the universal clockwork running. Bell's theorem proposed experiments that would disprove the existence of hidden variables if certain statistical inequalities are violated. There are several ways that such an experiment can be set up. I chose one that's different from Bell's original idea, but it's simple to understand and doesn't require much math to work out the results. Here it is:

Pairs of entangled photons are created and are sent in opposite directions toward separate laboratories where Alice and Bob ^{10} perform experiments on the photons as they arrive. Their two labs are many feet apart, and no communication is allowed to take place between them while the experiments are performed. The photon pairs are initially unpolarized, meaning that each photon has a 50% chance of passing through a polarizing filter oriented at an arbitrary angle. If a photon passes through, it becomes polarized at that angle. Moreover, if either of the entangled photons passes through a filter, both it and its entangled partner will be polarized at the same filter angle. It is the instantaneous polarization of both photons by one filter that would have given Einstein heartburn; this instantaneous action at a distance was at the heart of the EPR paradox.

Now immediately before the arrival of each of their entangled photons, Alice and Bob each randomly select one of three polarizing filters. Filter 1 is oriented at 0°, Filter 2 is oriented at 120°, and Filter 3 is oriented at 240°. There are photon detectors behind Alice's and Bob's filters that record whether or not photons pass through their filters. Each time a photon is scheduled to arrive, Alice and Bob randomly choose a filter and record which filter was chosen and whether or not a photon is detected. After doing this billions of times, Alice and Bob compare their notes on each experiment and count how many times they are in agreement; i.e., when both of them observed a photon or neither of them observed one in a given pair.

The experiments where Alice and Bob chose the same filters will always agree, so those results contain no information and they are discarded. Only those experiments where different filters were chosen are compared. Bell's inequality proves that if different filters were chosen and hidden variables directed the photons to go or not go through the filters, Alice and Bob will be in agreement for at least ⅓ of the pairs of entangled photons. However, if there are no hidden variables, then there will be agreement for ¼ of the pairs of entangled photons.

It's easy to show why the agreement probability is ¼ if there are no hidden variables. Suppose Alice randomly selects Filter 1 and her photon is the first one measured. It has a 50% chance of passing through the filter. If it does, it becomes polarized at 0°, and immediately so does its partner over in Bob's lab. Now if Bob chooses Filter 2, his 0° photon has a 25% chance of passing through that filter. On the other hand, if Alice's photon doesn't pass through her filter, then it becomes polarized at 90°, and immediately so does its partner over in Bob's lab. Now Bob's 90° photon has a 75% chance of passing through his Filter 2. Therefore, Bob's photon has a 25% chance of not passing through the filter. In order for Bob and Alice to agree, either both photons pass through their respective filters, or neither of them does. Whether Alice's photon passes through her filter or not, Alice's and Bob's results will agree in 25% of all cases. You can repeat the calculation by assuming Bob's photon arrives first, or use any combination of filters, and the result will still be 25%.

It's slightly harder to show that hidden variables increase the probability of agreement above ⅓. In

10 Alice and Bob are undoubtedly the most famous pair of experimental scientists in the world. They perform almost all of the experiments requiring teamwork that are found in the scientific literature.


essence, there are a total of eight possible ways that hidden variables can predetermine whether a photon will go through Filters 1, 2, and 3. Let a plus sign represent a hidden instruction to go through a filter, and a minus sign a hidden instruction not to go through a filter. Arranging the signs in the order of Filters 1, 2, and 3, there are eight possible sets of hidden variables: (+++), (++−), (+−+), (+−−), (−++), (−+−), (−−+) and (−−−). Now look at each set. Within each set there are three pairs of hidden variables that show how the photons behave when Alice and Bob use different filters. The first set is comprised of three pairs that are all alike (++), meaning Alice and Bob will be in 100% agreement if their photons are imprinted with that set of hidden variables. The last set is also comprised of three pairs that are all alike (−−), meaning that there's 100% agreement with that set too. Each of the other six sets of hidden variables has one pair out of three that is either (++) or (−−), so agreement will occur ⅓ of the time with those sets. So the minimum rate of agreement among all eight sets is ⅓.

It's important to remember that according to EPR, the same set of hidden variables must be imprinted on both photons of each photon pair because both photons must do exactly the same things when they are tested with the same filters. We don't know for certain which set of hidden variables is imprinted on any given pair – they're hidden after all – but no matter which set is actually imprinted on them, Alice's and Bob's experiments will agree at least ⅓ of the time. Using Bell's air-tight chain of logic, if Alice's and Bob's results agree less than ⅓ of the time, then the EPR hidden variable theory is false.
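The counting argument above can be checked by brute force. This sketch (names mine) enumerates all eight instruction sets, computes the agreement rate over the cases where Alice and Bob chose different filters, and compares it with the quantum prediction for filters 120° apart:

```python
import itertools
import math

# Each hidden-variable "instruction set" predetermines pass (+) or block (-)
# for Filters 1, 2, and 3.
def agreement_rate(instructions):
    # All ordered choices where Alice and Bob pick *different* filters
    cases = [(a, b) for a in range(3) for b in range(3) if a != b]
    agree = sum(1 for a, b in cases if instructions[a] == instructions[b])
    return agree / len(cases)

rates = [agreement_rate(s) for s in itertools.product('+-', repeat=3)]
hv_minimum = min(rates)   # 1/3: the Bell bound; (+++) and (---) give 100%

# Quantum prediction: once one photon collapses the pair, its partner passes
# a filter 120 degrees away with probability cos^2(120 deg) = 1/4, and the
# overall agreement rate works out to 1/4, as computed in the text.
qm_agreement = math.cos(math.radians(120)) ** 2

print(hv_minimum)     # 0.333... (1/3)
print(qm_agreement)   # 0.25, up to floating-point error
```

The gap between ⅓ and ¼ is exactly the gap the experiments test.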

Fortunately, the gap between a ¼ agreement rate and a ⅓ agreement rate is large enough to provide an unambiguous verdict if the experiment is carried out properly and it shows a violation of the inequality. Yet as simple as the experiment is in principle, it is technically very difficult to carry out. Pairs of entangled photons must be generated with a high level of reliability. If any pairs of photons or individual photons go missing, or photon pairs become disentangled in transit to Alice or Bob, the statistical results will be skewed. The polarizing filters must be nearly 100% efficient, and the random filter selections in Alice's and Bob's labs must be exquisitely timed and synchronized with the arrival of each photon. Finally, both experiments must start and finish in less time than it would take a signal to travel between Alice's and Bob's labs at light speed. The last requirement ensures that no hidden communication at or below light speed can take place between the photons.

The technology to fulfill Bell's experimental requirements did not exist in 1964, but sensitive EPR experiments were performed years later. Alain Aspect accomplished this feat in the 1980s using polarized light. His statistical results violated Bell's inequality with a high degree of certainty, proving conclusively that no local hidden variables exist. This was the final nail in the coffin of the classical-deterministic interpretation of quantum mechanics outlined in the EPR paper. Since the 1980s, less rigorous EPR demonstrations using polarized light have been carried out in college undergraduate physics labs that show effects similar to the full-blown experiments.

Looking back at the EPR paper, I think Niels Bohr could have easily dismissed Einstein's paradox by pointing out that the collapse of a quantum wave function over extended distances does not involve communication of any kind, and therefore it cannot violate causation. Modern communication theory wasn't invented until the late 1940s, so neither Einstein nor Bohr could have realized in 1935 that communication requires an exchange of information. Since Particle A and Particle B are in the same quantum state, there is literally no information that Particle A could communicate about its own quantum state to Particle B or vice versa, so no communication can take place between them. (This is why engineers can't exploit the EPR paradox to create an intergalactic instantaneous communication system. Darn.)

Richard Feynman said, "If you think you understand quantum mechanics, you don't understand quantum mechanics." Einstein thought he understood quantum mechanics from a classical-deterministic viewpoint. Clearly he didn't understand it completely. I don't know if Bohr understood it either, but he probably came a lot closer than Einstein did.


Appendix B – Entropy, Information, and Meaning

Prior to Claude Shannon's work on the subject in the 1940s, there wasn't a good definition of what information is. People had a fairly good intuitive idea of what it means to communicate, but there wasn't any formal language to express it. It turns out that there is a connection between information and entropy, a concept that was already well understood from thermodynamics. Statistical thermodynamics defines the entropy of a system as the Boltzmann constant times the natural logarithm of the total number of microstates of the system. Let's see how that relates to information.

Suppose I flip a trick two-headed coin, and I inform you that heads came up. How much information did I convey? According to information theory, the answer is 0. You already knew the coin would come up heads without me even telling you, because it's a trick coin. Now suppose I flip an ordinary coin, and I inform you that heads came up. According to information theory, I conveyed exactly one bit of information. This is because the number of bits of information I gave you concerning the state of the coin is equal to the base 2 logarithm of the number of possible ways the coin might have landed. Since there are two possible ways the coin might have landed, the number of bits equals the base 2 logarithm of 2, which is equal to one bit.

Now suppose I hold a deck of 52 cards that is thoroughly shuffled. I draw a card from the top of the deck and inform you that it's the deuce of clubs. How many bits of information did I convey to you? Well, I didn't tell you everything about the deck, only what the first card was. There are 52 possible cards I could have drawn, so the amount of information is the base 2 logarithm of 52, or log_{2} 52, which happens to be somewhere between 5 and 6 bits (5.7 bits to be more precise).

Next, suppose I keep drawing cards off the deck and tell you what each card is. There are 52! possible ways a shuffled deck could be arranged. This is an enormous number of ways, so giving you precise information about the order of the deck conveys a lot more information than only telling you what the first card is. The amount of information I gave you is log_{2} 52!. If you don't like dealing with factorials, it can be computed as log_{2} 52 + log_{2} 51 + log_{2} 50 + … + log_{2} 1 = 225.581 bits, which is about the same amount of information that's conveyed by the results of 226 coin tosses. (If you want to know how many ways the deck can be arranged, you can work backwards from the number of bits. It equals 2^{225.581} = 8.0658 × 10^{67} ways.)
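The card-deck arithmetic is easy to verify. A quick sketch:

```python
import math

# Bits conveyed by one card drawn from a well-shuffled 52-card deck
one_card_bits = math.log2(52)

# Bits conveyed by the full ordering of the deck: log2(52!),
# computed as log2(52) + log2(51) + ... + log2(1) to avoid a huge factorial
deck_bits = sum(math.log2(k) for k in range(1, 53))

print(round(one_card_bits, 1))   # 5.7
print(round(deck_bits, 3))       # 225.581
```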

Finally, suppose I purchase a new deck of cards with all of the cards in ascending order. I draw all the cards from the deuce of clubs through the ace of spades and tell you which cards I draw. I think you'll know by now that me telling you the precise order of the cards conveys 0 bits of information because you know that every new deck of cards is arranged exactly the same way. Only after shuffling the cards will any information be contained in that deck. So increasing entropy also increases information.

To put the icing on the cake, Shannon had to account for the fact that information content depends not only on the total number of states, but also on their probabilities. When we send a message in English, the letter “e” carries less information than the letter “z” because “e” occurs much more often in English. Suppose the n^{th} letter of the 26-letter alphabet has a probability p_{n} of occurring. The information content of that letter is −log_{2} p_{n} bits, and, averaging over the whole alphabet, each letter contributes −(p_{n} log_{2} p_{n}) bits to the entropy of the source (the minus signs are needed because the logarithm of a probability is negative).
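Shannon's weighting is straightforward to express in code. A minimal sketch (the two-symbol probabilities below are illustrative, not real letter frequencies):

```python
import math

# Average information (entropy) of a source: H = -sum(p * log2(p))
def entropy_bits(probabilities):
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A fair coin carries one full bit per toss, as in the coin example above...
print(entropy_bits([0.5, 0.5]))           # 1.0
# ...but a biased source carries less: frequent symbols are less surprising.
print(round(entropy_bits([0.9, 0.1]), 3))   # 0.469
```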

This is the gist of information theory and its connection with entropy. Shannon stated flatly that information equals entropy. As a side note, the second law of thermodynamics states that the entropy of an isolated system can never decrease over time. This suggests that the total amount of information in the universe is increasing over time, and that a universe constructed from information would have to expand. ^{11}

11 According to the holographic universe model, the total amount of information contained within a volume is limited by the number of bits that can be encoded on the surface surrounding the volume, which is equal to ¼ of the number of Planck areas on that surface, where a Planck area is equal to 2.6 × 10^{-66} cm^{2}. Conversely, the amount of information about the universe that an observer can receive is limited by the surface area through which the information passes. The human


This seems very neat and tidy, but it isn't complete. Quantum states fit into Shannon's definition of information perfectly because they're so simple; binary states like spin up versus spin down, positive charge versus negative charge, etc. The second law of thermodynamics should quickly drive a quantum universe into chaos. How did structure and meaning arise in such a place?

We need to examine the concept of meaning a little further. For example, you may be tempted to compute the information contained in a message expressed in English by simply adding up the bits contained in each letter of the message. That would take into account the probabilities of the individual letters, but not the order in which they appear. The sentence “the cow jumped over the moon” is more meaningful than the character string “ehw ejo toprhv emtd eon coum”, although both strings contain the same number of bits according to Shannon's definition. So it appears that actual words have much more inherent meaning than the individual letters that make them up, and word order and context are more meaningful still. Obviously, a Shakespeare sonnet has more meaning than the same words jumbled up.

If a sentence in English is translated into Chinese with precisely the same meaning, will both sentences convey the same amount of information? According to Shannon's definition of information, probably not, because Chinese characters have different probabilities than English words. Clearly then, information and meaning are two different things. Even with a solid mathematical definition of information in hand, a similar statistical definition of meaning seems to be quite elusive. How can meaning emerge in a universe built upon simple quantum states?

Here's where Mandelbrot sets may play a role. Fractals have complex patterns that mysteriously emerge from very simple mathematical expressions. If meaning can be ascribed to fractals, it comes from those patterns. A universe as a Mandelbrot set with meaning could be generated from elementary quantum states that have information without any more meaning than the individual letters in a random character string. In an English sentence, it's the pattern of the letters that gives it meaning. Likewise, pattern, structure, and meaning in a Mandelbrot universe could emerge from what seems to be quantum uncertainty and chaos.

Of course, Mandelbrot sets aren't the only way this could come about. There is an entire class of systems that display self-organized criticality (SOC) that has been studied extensively by Per Bak, Chao Tang, and Kurt Wiesenfeld. One of the features of SOC is an “attractor.” If points get close enough to the attractor, they remain close to it even if they are disturbed. One class of attractors is known as “strange” attractors, because they have non-integer dimensions – these are the fractals discussed earlier – but there are other types as well. Three conditions must exist in a system in order for SOC to arise: non-equilibrium, extended degrees of freedom, and non-linearity. Reality undoubtedly possesses the first two; the universe is not (yet) in a state of thermodynamic equilibrium, and there are spatial dimensions to provide at least three degrees of freedom. Whether reality possesses the necessary amount of non-linearity is an open question.

There is some evidence that SOC is a feature of our universe. The prevalence of fractal-type geometric forms, mentioned earlier, and the ubiquitous “pink noise” (or 1/f noise) that permeates reality are good indications. There is a good Wikipedia article on pink noise that gives many examples found in nature. The bottom line is that although entropy usually connotes randomness and chaos, there seems to be an underlying self-organizing principle at work in the universe. Unfortunately, although Bak et al. identified three necessary conditions for SOC, the sufficient conditions have not been found. Maybe when that question is answered, we'll finally be able to solve the reality riddle.

brain has a surface area around 1,500 – 2,000 cm^{2}. So the amount of information about the universe that is available to the human brain has a theoretical limit of around 2 × 10^{68} bits, which is quite a lot of information. The point is that there is still a certain amount of information that is “unknowable” to any observer. Maybe the role of space and time is to cordon off unknowable information while letting in the information that is important to the observer. See Appendix C.
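The footnote's estimate is a one-line calculation. A quick sketch (taking the midpoint of the surface-area range is my choice):

```python
# Holographic bound sketch: information capacity ~ (surface area) / (4 Planck areas)
planck_area_cm2 = 2.6e-66   # cm^2, as given in the footnote
brain_area_cm2 = 1750.0     # midpoint of the 1,500 - 2,000 cm^2 range

bits = brain_area_cm2 / (4.0 * planck_area_cm2)
print(f"{bits:.1e}")        # ~1.7e+68 bits, i.e. roughly 2 x 10^68
```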


Appendix C – A Few Comments About Space-time

Albert Einstein developed the theory of special relativity using a very simple set of principles. First, the theory only applies to special cases involving uniform, non-accelerating, motion; hence the word special. Second, there is no such thing as absolute motion in a universal frame of reference. All motion is relative; hence the word relativity. Physicists sometimes forget these first two principles, as we will see shortly.

Third, the speed of light is the same for every frame of reference that's in uniform motion with respect to all other frames of reference. As surely as night follows day, these three principles have direct consequences: when clocks and distances are observed from different frames of reference in uniform relative motion to each other, the measurements are different. Distances appear foreshortened in the direction of relative motion, and clocks in relative motion appear to slow down.

That's it. Everything I've just stated above can be described by simple mathematical formulas that any high school algebra student can understand. But Einstein wanted to use something that all frames of reference could agree on. He used Minkowski space, which wraps space and time in a neat package that is invariant (doesn't change) when seen from frames of reference in uniform motion. Minkowski space has three spatial dimensions like the Euclidean (flat) variety used in Newtonian physics, and a fourth dimension is stuck in there as j·c·t. Here j is the square root of -1 (an “imaginary number”), c is the speed of light, and t is time, so this fourth dimension also has spatial characteristics, although it points in the imaginary direction. Voilà! Space and time are now combined into a single invariant entity called space-time. Mathematically, distances are foreshortened and clocks slow down while their spatial coordinates change, just as with the simple high school algebraic formulas.
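The invariance of the Minkowski interval can be checked numerically. A minimal sketch in natural units with c = 1 (the event coordinates and boost speed are arbitrary choices of mine):

```python
import math

# The Minkowski interval s^2 = (c*t)^2 - x^2 is the quantity that all
# uniformly moving observers agree on. Boost an event by v = 0.6c and compare.
c = 1.0
v = 0.6
gamma = 1.0 / math.sqrt(1.0 - v * v / c**2)

t, x = 5.0, 3.0                        # an event in the "rest" frame
t_boost = gamma * (t - v * x / c**2)   # Lorentz transformation
x_boost = gamma * (x - v * t)

s2_rest = (c * t) ** 2 - x ** 2
s2_boost = (c * t_boost) ** 2 - x_boost ** 2
print(s2_rest, s2_boost)   # both ~16: the interval is unchanged by the boost
```

The individual coordinates change (time dilation, length contraction), but the combination s^2 does not; that is the "neat package" all frames agree on.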

It is here where I think relativity ran aground. Einstein took his 4-dimensional space-time literally. According to the literal interpretation, all objects in the universe are traveling along “world lines” through space-time at the speed of light. In other words, everything is in absolute motion with respect to the universal space-time frame of reference. Excuse me?? Remember what the second fundamental principle of special relativity said? It said there is no such thing as absolute motion or a universal frame of reference. That violates the “relative” part of special relativity, which is Fallacy No. 1.

Fallacy No. 2 is using Minkowski space to model cases where special relativity doesn't even apply. Take the so-called twin paradox (which isn't really a paradox at all). In this thought experiment, there is stay-at-home Alice and her traveling twin brother Bob. Bob leaves Earth and heads for a distant Planet X, traveling at nearly the speed of light. He reaches Planet X, makes a U-turn and heads home. When he returns to Earth, Alice has aged considerably more than Bob. Minkowski space has been used to explain the apparent paradox. Having traveled a long distance, Bob had to “cash in” some time in order to “buy” space for his world line whereas Alice's world line used up all her time by staying put. The fallacy of that explanation is that in order for Bob to do that, he had to accelerate at least twice; first when he blasted off toward Planet X, and again when he reversed direction back toward Earth. Special relativity applies only to uniform motion, and Bob's accelerations violate the “special” part of special relativity. So using Minkowski world lines within the framework of special relativity isn't the correct explanation, even if the math happens to give the correct results. ^{12} If any acceleration occurs, you are

12 There's a very easy way to explain the twin paradox without cashing in time for space or using any hand-waving arguments about Bob shifting frames of reference. Suppose there are three individuals, Alice, Bob, and Charlene, born at different times on different planets. Alice was born on Earth, and Bob is an astronaut who just happens to be passing Earth on his way toward Planet X. As Bob zooms by Alice, they exchange biometric data and notice they're exactly the same age! What a coincidence. As Bob zooms past Planet X, Charlene just happens to be zooming by in the opposite direction toward Earth. Bob and Charlene exchange biometric data, and guess what? They're also exactly the same age! Another amazing coincidence. (Thought experiments are very convenient that way.) When Charlene arrives at Earth, would you expect her to be the same age as Alice? Of course not; Alice will be older. The whole “paradox” is easily explained by the fact that the distances between Earth and Planet X in Bob's and Charlene's frames of reference are shorter compared to Alice's. End of story. Nobody shifted frames of reference or accelerated at all during their travels. There is no need for Bob or Charlene to cash in time to buy space in order for Charlene to be younger than Alice.


compelled to abandon special relativity, but then the math really starts to get hairy.
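The three-traveler version of the twin paradox described in footnote 12 can be worked out with hypothetical numbers, say Planet X at 4 light-years in Alice's frame and travelers moving at 0.8c:

```python
import math

# Hypothetical values: Earth-frame distance to Planet X, traveler speed
L = 4.0   # light-years
v = 0.8   # fraction of light speed
gamma = 1.0 / math.sqrt(1.0 - v * v)

alice_years = 2 * L / v                  # Earth-frame time for both legs
traveler_years = alice_years / gamma     # proper time along Bob's + Charlene's legs
print(alice_years, traveler_years)       # about 10 years for Alice, 6 for the travelers
```

No acceleration appears anywhere in the calculation: the age gap comes entirely from the shorter distance in the travelers' frames, exactly as the footnote argues.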

The mathematics of Minkowski space worked so well for special relativity that Einstein decided to keep using it for the general case, which includes acceleration and gravity. One of Einstein's insights was that inertial mass and gravitational mass are the same, which he thought was no mere coincidence. He had another brilliant insight that unlocked the whole thing: experiments performed in a gravitational field are indistinguishable from experiments performed in accelerating frames of reference without gravity. This equivalence principle allowed Einstein to learn what gravity does to Minkowski space by examining what happens in an accelerating frame of reference. Hanging on to the idea that every object travels through space-time at the speed of light, he was able to show that two objects falling toward each other from their mutual gravitational attraction is equivalent to their world lines intersecting as they travel through space-time. This means that space-time is curved in their vicinity. In special relativity, Einstein had already proved that mass and energy are equivalent. Therefore, he said gravity is mathematically equivalent to curvature of Minkowski space in the presence of mass-energy.

The math that goes along with this is extremely difficult, and only a few people in the world could handle it ^{13} when Einstein published his theory in 1915. But those who could handle it produced some very powerful results, including accurate predictions of Mercury's orbital precession, which Newtonian mechanics can't explain, and the bending of light, which Newtonian mechanics does predict, but not accurately. But is 4-dimensional space-time real, or just a mathematical gadget?

There are cases where Newton's laws agree almost exactly with general relativity. There are other cases where general relativity does a better job of predicting how objects move and how light behaves than Newton's laws. There are other things that general relativity predicts that Newton's laws can't explain at all, such as a clock slowing down when it's at the bottom of a “gravity well.” I have a hunch, however, that there could be instances in nature where even general relativity breaks down completely. Note that Minkowski space started out as Euclidean (flat) in special relativity and it ended up being curved in general relativity. The engineer within me sees this as a model perturbation. Engineers apply model perturbations all the time, like when we use linear equations to model non-linear systems. The results will be fine as long as the changes (perturbations) from the rest state are small. When perturbations become too large, the models break down.

Minkowski space is a model with flatness as its rest state and curvature as a perturbation. In some cases, like within or around black holes, gravity curves space-time so much that infinities start cropping up in the math. Engineers view mathematical infinities with a great deal of suspicion – it's hard to find infinities anywhere in nature – but physicists seem to take the math as literal truth, which I find puzzling. Another way to make the model break down from too much curvature is by trying to model the entire universe. Considering the universe as a “something” you can model seems fishy to me. If special relativity teaches us anything, it is that there is no such thing as events occurring simultaneously over cosmological distances. Even talking about what is going on in the Andromeda galaxy “right now” is absurd because “right now” only applies to “right here”; so how can anyone model the entire universe all at once and say it has a certain size and structure at a given point in time like a grapefruit or a basketball? Another discrepancy is that relativistic field equations permit backward time travel, as Kurt Gödel proved. Physicists should see a paradox like that as a huge red flag, but apparently they don't.

Cosmologists readily slip into fallacy mode when they talk about the “size” of the universe or traveling “through” space-time as if it's some kind of fixed coordinate system. Nothing travels “through” space-time, and here's why: every “thing” is at the exact center of the universe all the time. Even if I “travel” ten billion light-years from where I am right now, I won't get any closer to the edge of the universe than I was originally. I'll still be exactly at the center. How do you measure the size of something? Well, you pick two points that define the extent of the thing you're trying to measure, then you place a measuring stick between them. Alternatively, you can do what modern surveyors do: stand at one point, shine a beam of light at a corner reflector placed at the other point, measure the time it takes for the light to bounce back to you, and convert that time into distance. Okay, so which two points do you pick when measuring the size of the universe? That's a trick question, because you won't find any. Every point is always at the center, so no two points can be found that define the extent of the universe. How big is the universe? All anyone can say is that it's really big.

13 Unfortunately I am not one of them, but Arthur Eddington was. The physicist Ludwik Silberstein approached Eddington once and said, "Professor Eddington, you must be one of three persons in the world who understands general relativity." Eddington paused, unable to answer. Silberstein continued, "Don't be modest, Eddington!" Finally, Eddington replied, "On the contrary, I'm trying to think who the third person is."
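The surveyor's method is simple enough to sketch; the round-trip time below is a made-up example:

```python
C = 299_792_458.0  # speed of light, m/s

def distance_from_round_trip(t_seconds: float) -> float:
    """Surveyor's trick: half the round-trip light time, times c."""
    return C * t_seconds / 2.0

# A 2-microsecond round trip puts the corner reflector about 300 m away:
print(distance_from_round_trip(2.0e-6))  # ≈ 299.79 m
```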

When you look out into space on a clear moonless night, what do you see? Well, you see nearby planets and stars, and maybe even the Milky Way if you are far away from city lights. Beyond the Milky Way, you might see the Andromeda galaxy, which is actually quite close to us in cosmological terms, appearing as a fuzzy patch of light at 40° northern latitude. Everything looks flat, Euclidean, and 3-dimensional; in other words, things look pretty “normal” in our little corner of the cosmos. If your eyes were as good as the Hubble telescope, you might see some really distant objects, and here's where things get strange. The farther out you look, the farther apart those objects seem to be from each other, but that's an illusion. You see, when you look at objects that are really, really, really far away, you're looking at a universe that's much younger and much smaller than it is today (to the extent that terms such as age, smallness, or today make any sense at all when talking about the universe).[14] So very distant objects were actually much closer to each other than they appear to be from our vantage point.[15] In fact, looking at the distant universe from any vantage point is like looking at it through a fun-house mirror. It is literally impossible for our puny little 3-dimensional brains to properly visualize a universe in its entirety, given that every point is always at the exact center; therefore, we substitute a fictional version of reality that we can wrap our heads around: a giant 3-dimensional coordinate system with a clock ticking away in the background (or alternatively, a 4-dimensional space-time coordinate system, which isn't much better). Unfortunately, cosmologists aren't any better than the rest of us at comprehending the universe in its entirety.[16]

Here's the bottom line: space-time is not a “thing” you can travel through, like a ship traveling through the ocean. Traveling is only meaningful when you do it relative to something else. Minkowski space-time simply provides a way to take measurements between objects and events so that all frames of reference can agree on those measurements. The problem is, those measurements lose meaning entirely (in our 3-D version of reality) when the objects or events are too far apart, too far away, or too long ago.

I consider general relativity to be a big improvement over Newtonian physics, but I can't shake the feeling that there is something fundamentally wrong with using 4-dimensional space-time as a model of reality. I think physicists will eventually come up with a more complete theory of space, time, and reality; I believe a theory based on quantum entropy and data processing will emerge as the right one.

There's an old saying about space and time: Time is what keeps everything from happening all at once, and space is what keeps everything from happening to me. I believe there's a grain of truth in that. Space and time are there as a form of censorship. Think of the uncertainty principle. There is a fundamental limit to the amount of information about the universe that is knowable. The purpose of space and time is to draw a curtain in front of the information we're not permitted to know. Think of the entangled photons in Alice's and Bob's labs during the EPR experiment. Space and time can't isolate two observers that already “know” everything there is to know about each other, so as far as those two photons were concerned, there can be no space or time between them no matter how far they traveled.

14 This is according to the big bang theory. I don't doubt that the universe had a beginning where everything was crunched together, because from today's vantage point there is every indication that the universe really did start out that way. However, I'm not sure at all about the details of how this happened or why the universe decided to have a beginning. Appendix I offers a somewhat farcical account of the current scientific belief system in that regard.

15 If you could see all the way back to the big bang, which was an infinitesimal point, it would seem to fill the entire sky. Now that's the ultimate optical illusion.

16 … which is why cosmologists keep referring to silly, nonsensical things like traveling through space and time, or measuring the size of the universe.


Appendix D – Space and Time with Quantum Weirdness

Many “quantum” phenomena can also be described in classical terms. Albert Einstein wasted much of his career vainly attempting to debunk quantum mechanics by explaining away the results of experiments that violated his cherished classical principles. Take, for example, Young's famous double-slit experiment, named after Thomas Young (1773-1829). Richard Feynman is quoted as saying, “All of quantum mechanics can be gleaned from carefully thinking through the implications of this single experiment.” The gist of the experiment is that particles aimed at a double slit produce wave interference patterns on a detector screen placed behind the slits. But if an observer tries to measure which of the slits the particles pass through, the wave interference pattern is destroyed and the particles behave like bullets instead of waves. Einstein says, “Okay, I can explain that. If a particle detector is placed at one of the slits, the measurement disrupts the flight of particles through the slit, destroying the wave interference. It's just a problem of the measuring device imparting extra momentum to the particles.” That seems like a plausible explanation, but it's wrong, as we shall see.

A variation of Young's experiment is the so-called Quantum Eraser Experiment. This experiment is truly weird and it is impossible to explain it using classical arguments. A beam of photons (light) is directed at a double slit, one photon at a time. Each slit contains a beta barium borate crystal that converts one photon into a pair of entangled photons, which go in two different directions: one entangled photon goes toward a photon detector D0 that is set up to look for interference patterns. The other entangled photon goes into an “idler circuit” where quantum weirdness takes place. The idler circuit has partially-silvered mirrors that give idler photons from Slit A or Slit B a 50/50 chance of being reflected or passing through the mirrors. If an idler photon comes from Slit A, it has a 50% chance of being reflected toward photon detector D1. Otherwise, it passes into a quantum eraser, which combines its path with the path of idler photons from Slit B that pass through their mirror. These combined paths go to detector D2. If an idler photon comes from Slit B, it has a 50% chance of being reflected toward photon detector D3. Otherwise, it passes into a quantum eraser, which combines its path with the path of idler photons from Slit A that pass through their mirror. These combined paths go to detector D4.

For each primary photon aimed at the double slit, one entangled photon arrives at the vicinity of D0, with apparently no way of telling which slit this photon came from. Over in the idler circuit, there are four possibilities:

1. A photon is detected at D1, which means it definitely came from Slit A.

2. A photon is detected at D3, which means it definitely came from Slit B.

3. A photon is detected at D2, which means it could have come from either Slit A or Slit B.

4. A photon is detected at D4, which means it could have come from either Slit A or Slit B.

Now, let us turn our attention back to the D0 detector. This detector is placed on a track so it can be moved back and forth parallel to the positions of the slits. Charting millions of photon “hits” versus distances along the track should produce a clear wave interference pattern, since the photons originated in the beta barium borate crystals with no a priori way of telling which slit they came from. However, the raw results show only an ill-defined smudge pattern of hits, containing no information at all. It seems that this experiment is a complete failure. But suppose we correlate the photon hits at the D0 detector with hits from their entangled twins in the idler circuits using a coincidence recorder. When we do this, four distinct patterns emerge at D0:

1. D0 hits correlated with D1 hits show bullet-like particles coming from Slit A.

2. D0 hits correlated with D3 hits show bullet-like particles coming from Slit B.

3. D0 hits correlated with D2 hits show a “positive” wave interference pattern.[17]

4. D0 hits correlated with D4 hits show a “negative” wave interference pattern.[18]

17 A positive light pattern would be bright, dark, bright, dark …

18 A negative light pattern would be dark, bright, dark, bright …


The sum of these four patterns produces that awful smudge pattern at D0, but breaking up the smudge into correlated groups of photons reveals its true nature: both particle and wave patterns emerge, based on the “decisions” the entangled photons made at the half-silvered mirrors in the idler circuits. Now here's the truly weird part: if you make the paths of the idler circuits much longer than the D0 path, photons will be detected at D0 before any correlated photons could arrive at D1, D2, D3 or D4. It's as if the D0 photons knew ahead of time where their entangled twins would be detected. Observations in the “present” seem to depend on events in the “future.” These results would leave Einstein scratching his head because there is simply no classical explanation for them – there is nothing happening in the idler circuit that could “disrupt” photons arriving at D0 or impart any momentum to them.
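The bookkeeping behind the smudge can be sketched numerically. This is only a toy model of the statistics described above – the fringe spacing and the ±cosine shapes are my assumptions, not the real optics – but it shows how two complementary fringe patterns plus two flat which-way patterns add up to a featureless blur:

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 201)   # D0 position along its track (arbitrary units)
phi = 10 * np.pi * x              # assumed phase difference between slit paths at x

# Conditional arrival rates at D0, given which idler detector fired
# (each idler outcome occurs with probability 1/4):
p_d1 = np.full_like(x, 0.25)        # which-way, Slit A: no fringes
p_d3 = np.full_like(x, 0.25)        # which-way, Slit B: no fringes
p_d2 = 0.25 * (1 + np.cos(phi))     # erased: "positive" fringes
p_d4 = 0.25 * (1 - np.cos(phi))     # erased: "negative" fringes

# Uncorrelated raw counts at D0 = sum over all idler outcomes:
total = p_d1 + p_d2 + p_d3 + p_d4
assert np.allclose(total, 1.0)      # perfectly flat: the information-free smudge
```

The positive and negative fringes cancel each other exactly, which is why no pattern can be seen until the coincidence counter sorts the hits into their four groups.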

This experiment is similar to John Wheeler's delayed choice thought experiment. At first blush, it seems to violate causation by sending signals backwards in time. It also seems to violate the speed of light limitation from special relativity, because there is no limit, in principle, to how far away D0 can be from the idler detectors and still be affected by events taking place there. However, there is no violation of either causation or relativity because no “information” is really being sent backwards through time or instantaneously across space. (No, you can't use this kind of apparatus to instantaneously send Morse code signals across the galaxy. Darn.)

The correct interpretation is this: until and unless an observation is made, there is literally no information regarding the past. Thus, even if the idler photons are detected millions of years after the photons are detected at D0, no distinct patterns will emerge at D0 until the photons are properly correlated. Now here's a very important corollary to this: once an observation is made, there is no way to “undo” the information and change history. This is because of the deep connection between information and entropy. Remember what the second law of thermodynamics says: the entropy of a closed system can never decrease. There is a corresponding law of information: information cannot be destroyed. Once something happens, it happens for good. The reason backward time travel is impossible is because it violates the second law of thermodynamics.[19] Another corollary is that you can't see into (or remember) the future, because information about the future simply doesn't exist. Information is created in the present, permanently becoming the past. (Note that observation doesn't necessarily require intelligence. Two particles colliding is the same as an observation as far as those particles are concerned.)

In the final analysis, the Quantum Eraser Experiment redefines space and our concept of past, present, and future. In our puny little 3-dimensional brains, it looks like the D0 photons and their idler twins head in different directions and exist in different places and are detected at different times. From the perspective of the photons themselves, however, there are no such spatial or temporal distinctions. What we humans see going on in space and time (at least in this experiment) is clearly an illusion. There is a much more fundamental reality concerning space and time than is visible in our workaday world. It's important to remember that quantum mechanics never violates causation or special relativity. In most instances it's invisible, hiding just below our threshold of perception, but its effects are real and they cannot be ignored or glossed over. If you ask Nature sensible questions, She will always provide sensible answers. Nonsensical answers are always the result of asking nonsensical questions.

A deterministic universe is a dead universe, devoid of information and meaning. Information can only arise by shuffling the deck and introducing uncertainty and entropy. Entropy and uncertainty are not the enemy of reality; they are the very things that give information, life, and meaning to it.

19 Arthur Eddington (one of the few people who understood general relativity in 1915) is famously quoted as saying, “If someone points out to you that your pet theory of the universe is in disagreement with Maxwell's equations – then so much the worse for Maxwell's equations. If it is found to be contradicted by observation – well these experimentalists do bungle things sometimes. But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.” Whatever the “theory of everything” turns out to be, it will have to account for the Quantum Eraser Experiment and it will have to conform to the second law of thermodynamics. Otherwise, it will collapse in deepest humiliation (using Eddington's words). I'm convinced that any theory based on classical space-time or determinism in any way will fail because of this.


Appendix D.1 – Setup for the Quantum Eraser Experiment

The drawing above depicts the apparatus used in the delayed choice quantum eraser experiment. Light from a laser, shown as the large blue arrow in the upper left corner, is emitted one photon at a time onto a screen with two slits, marked A and B. The path of photons from Slit A is marked by a green line, and the path of photons from Slit B is marked by a red line.

A crystal of beta barium borate, Ba(BO₂)₂, shown by the pink rectangle, splits each photon into two half-frequency entangled photons. The first entangled photon heads toward the upper right through a lens that directs it toward a detector marked D0, which is set on a track that can move back and forth. The second entangled photon heads toward the lower right into the “idler circuit.” Photons originating from Slit A follow the green paths, and photons originating from Slit B follow the red paths.

In the idler circuit, a prism, marked P, splits the red and green paths in two different directions. Half-silvered mirrors, marked M, reflect 50% of the photons and allow 50% of them to pass through. There are four photon detectors, marked D1, D2, D3 and D4. An entangled photon entering the idler circuit triggers one detector, each detector having a 25% probability of registering it. D1 will only register photons on the green path from Slit A, and detector D3 will only register photons on the red path from Slit B, providing “which way” information. Photons registered by D2 or D4 have an equal probability of coming from either Slit A or Slit B, erasing any “which way” information.

A coincidence counter records the arrival of photons at the idler circuit detectors and correlates them with the arrival of photons detected by D0. Photons registered by the movable D0 detector that are correlated with photons registered by D2 or D4 produce interference patterns, but photons registered by D0 that are correlated with photons registered by D1 or D3 do not produce interference patterns. Because the light path to the first two mirrors is much longer than the light path to D0, it's impossible for information about which path an idler photon took at the half-silvered mirrors to be transmitted at the speed of light back to its entangled partner before the partner is registered by D0.
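The timing argument is just arithmetic. With made-up path lengths (the real experiment's dimensions differ), the D0 photon is always registered first:

```python
c = 299_792_458.0    # speed of light, m/s
L_d0 = 1.0           # assumed crystal-to-D0 path length, m
L_idler = 3.0        # assumed (longer) crystal-to-idler-detector path length, m

t_d0 = L_d0 / c
t_idler = L_idler / c
assert t_d0 < t_idler   # D0 fires before its twin's "decision" in the idler circuit
print(f"D0 leads by {(t_idler - t_d0) * 1e9:.2f} ns")
```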


Appendix E – The Problem of Spin

There is the problem of angular momentum – at least it's a problem for me. Let's start off with uniform motion. According to both special and general relativity, there is an exact symmetry between the uniform motions of two objects. If I say Object A moves at velocity +v with respect to Object B, then you can say Object B moves at velocity −v with respect to Object A, and we would both be right. A similar equivalence exists in general relativity between gravity and objects accelerating in a straight line. An accelerating frame of reference without a gravitational field is indistinguishable from a non-accelerating frame of reference in a gravitational field, and freely-falling frames of reference in a gravitational field are indistinguishable from non-accelerating frames of reference without a gravitational field.[20]

Here's another way the gravitational equivalence works: when a rocket ship accelerates, its occupants feel the acceleration just as if they're immersed in a uniform gravitational field pointing in the opposite direction of the rocket motor's thrust. As far as the rocket ship's occupants are concerned, the entire universe is accelerating backwards due to a gravitational field extending to infinity – except the rocket ship itself, which is held perfectly still by the thrust of the rocket motor. Distant clocks ahead of the rocket ship are seen by the occupants as being at the top of a gravitational well, and those clocks seem to run fast. Distant clocks behind the rocket ship are seen by the occupants as being at the bottom of a gravitational well, and those clocks seem to run slow.[21] Although this scenario seems unlikely, it could be physically realized if there were an actual uniform gravitational field extending forever. In other words, the equivalence between gravity and acceleration makes sense logically, even if it isn't a very plausible explanation of what's really happening. So far, so good.
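In the weak-field limit, the clock-rate difference the occupants see is approximately a·d/c², where a is the rocket's acceleration and d the distance to the clock. A quick sketch with assumed numbers:

```python
c = 299_792_458.0   # speed of light, m/s
a = 9.8             # assumed rocket acceleration, m/s^2 (a comfortable 1 g)
d = 100.0           # assumed distance to a clock "ahead" of the rocket, m

shift = a * d / c**2   # fractional rate difference (the clock ahead runs fast)
print(f"{shift:.2e}")  # ~1.1e-14: tiny, but real and measurable in principle
```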

It's rotational equivalence as described by relativity that doesn't make sense to me. When an object spins, it knows it's spinning because every atom accelerates toward the center of rotation and there is a centrifugal force felt in the opposite direction. But unlike linear acceleration, it doesn't take a rocket motor to keep a spinning object accelerating toward the center of rotation – it just keeps spinning and accelerating forever all by itself. Now here's the part I don't get. All the literature I've read concerning general relativity states that the sensation of spinning is somehow linked to gravity and the bending of space-time. Some of these books refer to spinning as motion relative to “the fixed stars,” which is shorthand for all the mass in the universe.[22] So according to this account, the spinning sensation is caused entirely by a mysterious gravitational influence on the spinning object by all the mass in the universe. This is saying that the sensation of spinning is equivalent to being surrounded by a universe that's spinning. If there were no “fixed stars” there would be no sensation of spinning.

I'm having some serious problems with that equivalence. First of all, since mass is more or less uniformly spread throughout the universe, the gravitational influence of the universe (i.e., the “fixed stars”) on a spinning object would be equivalent to being surrounded by a hollow sphere having a uniform mass density. In the Newtonian world, the gravitational field inside such a hollow sphere is zero because the masses in all directions cancel out. The same thing is true in general relativity if the hollow sphere is stationary. Okay, so let's set the sphere into rotation around an object and see what happens. According to Newtonian physics, nothing happens. However, according to the equations of general relativity, rotating matter-energy “out there” has peculiar effects on the local space-time “right here.” So when I spin around, it's exactly equivalent to the entire universe rotating around me and causing bending and twisting of space-time near me.
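That Newtonian claim – zero field inside a uniform hollow sphere – can be checked numerically. Here's a Monte Carlo sketch (unit shell, unit masses, G = 1; all figures arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample points uniformly on a unit spherical shell
v = rng.normal(size=(200_000, 3))
shell = v / np.linalg.norm(v, axis=1, keepdims=True)

# Average inverse-square attraction at an interior test point
p = np.array([0.4, 0.1, -0.2])
r = shell - p
g = (r / np.linalg.norm(r, axis=1, keepdims=True) ** 3).mean(axis=0)

print(np.linalg.norm(g))  # ≈ 0: the pulls in every direction cancel out
```

The nearer part of the shell pulls harder per unit mass, but the farther part contains proportionally more mass, and the two effects cancel exactly.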

I'm just an engineer who doesn't know how to solve Einstein's equations, except for the simplest cases, so I guess I'll just have to go along with those results.[23] But here's the thing that bothers me: If I'm spinning around at one revolution per second, then according to the general relativity equivalence principle, the same spinning sensation could be produced by the universe rotating around me at one revolution per second, and there would be no way in principle for me to know which scenario is causing my motion sickness. In the case of linear acceleration, I can be tricked into believing the fantasy that the entire universe except me is accelerating due to gravity. Although such a thing is implausible, it doesn't violate the principles of general relativity. But in the case of rotation, how can the sun, which is eight light-minutes away, let alone the “fixed stars” much farther away, make one revolution around me every second without going faster than light? I know that a universe rotating around me once per second is not only implausible, it's impossible according to the very principles the theory is based on; therefore, I cannot be tricked into believing that such an equivalence is true.

20 This can be expressed mathematically as bending space-time, although I don't think that's what reality is.

21 This is not to be confused with the “red” and “blue” Doppler shifts due to relative motions. The effect of acceleration/gravity on clocks is in addition to those Doppler shifts.

22 I dislike the term “fixed stars” intensely because nothing in the universe is fixed. I use that term here only because many other authors use it when talking about rotating bodies.

Let's consider the hypothesis that without “the fixed stars” exerting their mysterious gravitational influence over me, there would be no sensation of spinning. This sounds more like astrology than physics, and something tells me this picture just isn't right. If it were true and I were the only object in an otherwise empty universe, I could attach rocket motors to myself and spin around as fast as I want and not feel a thing. It seems much more reasonable to suppose that angular momentum – unlike linear momentum and linear acceleration – is an intrinsic property of an object that doesn't depend on relationships with other objects or other frames of reference. In other words, a spinning object accelerates toward its axis of rotation regardless of what the rest of the universe is doing.

There is a quantum-mechanical property known as “spin.” The basic unit of spin is given by Planck's constant, h, which is equal to 6.626068 × 10⁻³⁴ m²·kg/s. For technical reasons, physicists use the so-called “reduced” Planck constant, ħ or “h-bar,” which is equal to h divided by 2π. Due to an historical accident, elementary particles turn out to have quantum spin values that are multiples of ½ħ.[24]

Now notice the physical dimensions of Planck's constant: m²·kg/s. They are exactly the same dimensions as the angular momentum, L, that a pitcher imparts to a baseball when he throws a curve. Quantum physicists go to great pains to stress that their spin property ħ has absolutely nothing to do with the L of spinning baseballs, as if they're embarrassed that there's a connection between their field of study and everyday reality. In fact, quantum physicists leave out ħ altogether when talking about spin, saying that the electron has spin ½, a photon has a spin of 1, a graviton a spin of 2, etc., as if it's a pure number. However, the units of Planck's constant describe a spinning physical object, and that fact is undeniable. Planck's constant is one of the most fundamental constants in nature – it's the most fundamental constant in quantum physics – and isn't it odd that this most fundamental constant happens to have the exact dimensions of angular momentum?
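To get a feel for the scale, compare ħ with the L of a real curveball. The ball's mass, radius, and spin rate below are rough figures I'm assuming, not measured values:

```python
import math

hbar = 1.054_571_817e-34   # reduced Planck constant, m^2·kg/s

# Angular momentum of a spinning baseball, modeled as a solid sphere:
# L = I * omega, with I = (2/5) m r^2
m = 0.145                  # mass, kg (assumed)
r = 0.0365                 # radius, m (assumed)
omega = 2 * math.pi * 30   # spin rate, rad/s (~30 revolutions/second, assumed)

L = (2 / 5) * m * r**2 * omega
print(f"L = {L:.3e} m^2·kg/s, or about {L / hbar:.1e} units of h-bar")
```

A baseball carries something like 10³² quanta of spin, which is why the quantization is utterly invisible at the ballpark – but the units are the same.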

I think I know why physicists are so reluctant to admit there's any connection between ħ and L: the definition of quantum-mechanical spin says that it is an intrinsic form of angular momentum carried by elementary particles; however, general relativity stresses that there are no intrinsic motions – all motions are relative, including spin. But what if that isn't true; i.e., what if spin really is an intrinsic property of a rotating object, not only for electrons but for baseballs too? I may be wrong, but this would conflict with general relativity, and it could bring down the entire edifice. That is something physicists don't like to contemplate, but it doesn't bother me at all. Engineers don't have strong emotional attachments to theories. If one theory doesn't work, we just find another one that does.

As you may have guessed, I have some serious reservations about general relativity. As I've said over and over, it works pretty well when describing local phenomena and it makes better predictions than Newton's laws. I just think that general relativity gives us a distorted picture of the universe as a whole, and the reality riddle won't be solved until science abandons that theory or modifies it.

23 I'm only repeating what I've read in the literature; I lack the mathematical skills to solve Einstein's equations for a rotating universe on my own. If I've misinterpreted what I've read, I'm sorry.

24 It took quantum physicists several tries until they finally got the value of spin right.


Appendix F – A New Theory of Gravity

Newton's theory of gravity and Einstein's general relativity both try to explain the obvious fact that massive objects tend to pull toward each other. Newton's theory postulates a gravitational potential that is inversely proportional to distance, which produces a force that is inversely proportional to distance squared. Einstein's theory treats gravitational attraction as a geometric property of curved space-time. In my opinion, both theories are flawed because they are classical-deterministic and background-dependent theories.

Erik Verlinde has approached the question of gravity and inertia from an entirely new direction. According to his paper On the Origin of Gravity and the Laws of Newton, gravity is an emergent force that originates from entropy. He derived both Newton's equations and Einstein's equations of general relativity from entropy; there is certainly nothing new about those equations, and he will have to go much further than that if he expects his theory to gain any traction. However, I truly believe that when and if the theory of entropic gravity is fully developed, it will lead science in a new and better direction than any of the other prevailing theories being explored at the present time.

If I understand the concept of entropic gravity correctly, it starts out from the holographic principle, which says that all information contained within any volume of space is actually encoded on a hypothetical surface surrounding that volume. Even mentioning the concept of a holographic universe puts most physicists' teeth on edge because it sounds all squishy and new-agey, like talking about energy vortexes and crystal therapies. Yet even some old-school physicists, like Leonard Susskind, take the holographic principle seriously because it happens to explain the properties of black holes rather nicely.[25]

Verlinde begins his paper with the idea of entropic force, using forces in a polymer strand immersed in a temperature bath as his model. When you pull on the strand, it exerts a force that tends to resist straightening it. Verlinde says that this force has nothing to do with energy – it's caused simply by the arrangement of the atoms of the strand in space. Pulling on the strand and stretching it reduces the entropy of the strand's hologram, and the natural tendency of all things is to maximize entropy; hence, a force resists the change. Likewise, the reason that two masses fall toward each other is because the configuration of “togetherness” maximizes the entropy of their hologram. The paper then goes on to derive Newton's laws of inertia in the same bottom-up manner. I would urge anyone who has an interest in this subject to read Verlinde's paper for a much better and more complete treatment than I can possibly give it here.
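Verlinde's polymer analogy has a textbook formula behind it: an ideal (Gaussian) chain of N segments of length b, held at temperature T and stretched to extension x, pulls back with force F = 3·k_B·T·x/(N·b²), even though no energy is stored in the stretch. A sketch with assumed numbers:

```python
kB = 1.380649e-23   # Boltzmann constant, J/K

# Entropic restoring force of an ideal polymer chain: F = 3 kB T x / (N b^2)
T = 300.0           # temperature of the bath, K (assumed)
N = 1000            # number of chain segments (assumed)
b = 1e-9            # segment length, m (assumed)
x = 50e-9           # end-to-end extension, m (assumed)

F = 3 * kB * T * x / (N * b**2)
print(f"F = {F:.2e} N")  # ~6e-13 N, purely from counting configurations
```

Note that temperature appears in the formula: the “force” exists only because the bath keeps reshuffling the chain's configurations, which is exactly the sense in which Verlinde wants gravity to be entropic.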

The salient point is that gravity and inertia are both emergent properties that arise naturally from thermodynamics. You don't need a hammer to pound nature into submission, forcing it to give us a theory of gravity. There is no need to use exotic strings vibrating in 10-dimensional space-time continua or incomprehensible mathematics to explain these things – they should explain themselves using rather basic and simple first principles. Pressure and temperature are also emergent properties, and just as neither pressure nor temperature exist on the submicroscopic level, neither do gravity and inertia. All four of these phenomena can be shown to be closely related to entropy, but entropy only applies to aggregate collections of particles and not to the individual states of the particles themselves.

There are many critics of Verlinde's work who view any deviation from the orthodoxy of relativity and quantum field theory as heresy, punishable by excommunication from the scientific community. They argue that gravity is a reversible process, whereas entropy isn't. By their logic, a reversible process like gravity simply cannot emerge from an irreversible process like entropy. Well, pressure is also a reversible process, and it is clearly related to entropy. We certainly don't need curved 4-dimensional space-time to explain pressure or temperature, and we shouldn't need it to explain gravity either.

25 Susskind (the Father of String Theory) gave some very convincing lectures on the holographic principle as it relates to black holes. Lately, he's been sidetracked as being an advocate of the anthropic principle. In 2004 he engaged in a heated email exchange with Lee Smolin, who argued that the anthropic principle is not science.


Appendix G – Trying to Erase Relativity

I recently watched a video of a lecture on string theory by Leonard Susskind at Stanford University. He introduced the concept of strings with a thought experiment involving a box of elementary particles that are moving relativistically.[26] He then boosts the box to nearly the speed of light in the z-direction of space. What happens next is very interesting. According to Susskind, as seen from our frame of reference, the particles start behaving “classically” in the other two dimensions, the x-dimension and the y-dimension. (By classically, he means according to Newtonian physics.) It's as if he magically erased relativity from the picture with a wave of his marker pen. He then went on to lecture about strings as both relativistic and classical objects; but strings really aren't the point of what I have to say.

After I thought about this lecture for a while, I had a deep insight. Instead of a box of particles, let's imagine a room full of ordinary objects (which could include people) that are all moving relativistically with respect to each other. ^{27} We can make the room as big as it needs to be. If we boost the room to just under the speed of light in the z-direction, almost all of the “available” motion is “used up” in the z-direction, so there is very little available motion left for the other two directions. From our perspective, the Lorentz transformation slows down time and everything shrinks in the z-direction, making the room slower, flatter, and more 2-dimensional. What happened is that we have (almost) eliminated time in that room by replacing its dimension with the z-dimension. ^{28}

But have we eliminated relativity and created a classical world? Well, everything in the room does move a lot slower in the x- and y-directions. But so does light, which makes even these slow motions seem “relativistic” in the room. According to special relativity, light cones project from all objects into time (the future and the past) in Minkowski space-time; events not inside an object's light cones are “unknowable” to that object. When everything is boosted in the z-direction, those light cones are in the z-direction, and they become very narrow. Also, information sent from one person takes a very long time to reach another person. Now, you'd say that's just a matter of time scaling, so doing the reverse Lorentz transformation would speed everything back up in our reference frame. That's true, but all of the special relativistic effects are still there in the boosted room because we can still see them from our reference frame; also, when we reverse the Lorentz transformation, those effects return in all their glory. We didn't eliminate any special relativistic effects in the room by boosting it in the z-direction.

What about gravity? Well, the Lorentz transformation increases all the masses in the room. The increased masses also show up as increased inertias in the x-y plane, which makes it harder to change the motions of objects in those directions. Now gravity isn't really too important in a room full of ordinary objects, but we can increase the size of the room to include the solar system, where gravity plays a bigger role. Let's boost the room to nearly the speed of light in the direction perpendicular to the plane of the planets' orbits. We'll stick to good ol' Newton's theory of gravity and inertia, because it works pretty well in the solar system. Let's see what happens (I promise to keep the math very simple).

First, let's make the planets' orbits circular because that's easier. The centrifugal force of a planet revolving in a circle is equal to mv^{2}/r, where m is the planet's mass, v is the planet's velocity, and r is the radius of the circle. The gravitational attraction between the sun and the planet is equal to GMm/r^{2}. Here, M is the mass of the sun and G is the universal gravitational constant. By setting the centrifugal

force equal to the force of gravity and solving for velocity, you get v = √(GM/r). Notice that the mass of the planet (small m) disappeared from the formula. Now let's boost the solar system in the z-direction to around 0.994 times the speed of light. The Lorentz transformation increases the masses of the sun and

26 Susskind is a particle physicist. Like most particle physicists, he looks at the universe as collections of elementary particles.

27 Actually, everything always moves relativistically. We just don't notice relativistic effects because ordinarily they're too small to observe.

28 Remember that according to special relativity, all objects “fall through” Minkowski space-time at the speed of light. Ordinarily, our “fall” is mainly in the time dimension, but the boosted room is moving very fast in the z-direction instead. Time consumes the z-dimension and we wind up with one less dimension to worry about.


the planets by a nice, round factor of 9. The sun and planets turn into flat pancake-shaped objects, but the radii of the orbits in the x-y plane don't change. Let's assume that Newton's laws in “Pancakeland” are the same as everywhere else. When you plug in the new mass of the sun, 9M, into the formula for v,

you see that the orbital velocities increase by a factor of √9 = 3. Adding the effect of time dilation will divide the new v by 9, reducing it to 1/3 of its original value. On the other hand, the other velocities in “Pancakeland” are reduced to 1/9 of their original values. My point is that in the boosted solar system, gravity-induced velocities actually increase relative to other velocities; so the effects of gravity certainly don't go away. In summary, to the extent that any special or general relativistic effects exist in the non-boosted room, boosting the room close to light speed won't make those effects disappear or even diminish them. Those effects must still be considered whether the room is boosted or not.
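The arithmetic above is easy to check with a few lines of Python. This is just a sketch that follows the appendix's own bookkeeping (relativistic mass γm plugged into Newton's orbit formula); the 0.994 figure in the text is a rounding of the speed that makes γ come out to exactly 9:

```python
import math

def gamma(beta):
    """Lorentz factor for speed beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta**2)

# The boost speed the text rounds to 0.994c is the one that makes gamma = 9:
beta = math.sqrt(1.0 - 1.0 / 81.0)     # ~0.9938
g = gamma(beta)                        # ~9.0

# Following the essay's bookkeeping: the sun's mass becomes 9M, and since
# the orbital speed is v = sqrt(G*M/r), it grows by sqrt(9) = 3 ...
orbital_boost = math.sqrt(g)           # 3.0
# ... while time dilation then divides every rate in Pancakeland by 9:
net_orbital = orbital_boost / g        # orbital speeds end up at 1/3
net_other = 1.0 / g                    # generic speeds end up at 1/9

print(round(beta, 4), round(g, 3), round(net_orbital, 4), round(net_other, 4))
```

So relative to everything else in the boosted room, gravity-driven motion comes out three times faster, which is exactly the comparison the paragraph above is making.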

But we've only gone to 99.4% of the speed of light, so why not “go all the way” and see what happens when we boost the room to 100% of the speed of light? This should erase relativity because all motions would cease and time would end. There would be a timeless, static room where nothing ever changes. Well, as every high school physics student knows, you can't do that. The standard reason is that the masses of all objects in the room would equal infinity at the speed of light, requiring the addition of infinite energy to make that happen. But there's a twist to this story that's much more interesting.

Transforming a system of objects from one reference frame into another is the same as data mapping, which I discussed a little bit at the end of the main essay. You can transform data about those objects concerning their positions, velocities, etc., into a new space. The new data may look different by altering positions, velocities, etc., of objects in the new space; however, the original data are still present, but encoded in a different way. Now here's the kicker: if you reverse the transformation, you should get the original data back and come up with exactly the same configuration of objects you started with. But if you don't, your transformations aren't right. That's what led me to a very deep insight.
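That round-trip idea can be demonstrated in a few lines. Here's a minimal Python sketch of my own (not anything from Susskind's lecture), working in units where c = 1: boost an event's coordinates, apply the inverse boost, and the original data come back, because the Lorentz transformation is invertible for any speed below c:

```python
import math

def boost(t, z, beta):
    """Lorentz-transform an event (t, z) into a frame moving at beta*c."""
    g = 1.0 / math.sqrt(1.0 - beta**2)   # gamma diverges as beta -> 1
    return g * (t - beta * z), g * (z - beta * t)

t, z = 3.0, 4.0                  # an arbitrary event
tb, zb = boost(t, z, 0.994)      # map the data into the boosted frame
t2, z2 = boost(tb, zb, -0.994)   # reverse the mapping
print(t2, z2)                    # back to (3.0, 4.0), within rounding

# At beta = 1 exactly, gamma is infinite and boost() divides by zero:
# the mapping has no inverse, so the original data could not be recovered.
```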

Remember Claude Shannon's statement: information equals entropy. Since the second law of thermodynamics states that entropy cannot be destroyed, there's an immediate corollary that says information cannot be destroyed. Remember, even at a tiny ε below the speed of light, objects in the boosted room still move very slowly in the x-y directions. But right at the speed of light, time stops completely. By boosting a room to the speed of light, all information concerning x-y motion is lost and reversing the data mapping won't retrieve any of that information. Entropy was destroyed. If we try to return “Pancakeland” back to normal and reverse the Lorentz transformation, the planets would just hang motionless around the sun, which makes no sense. It seems that whenever we try to defeat entropy, the universe conspires against us somehow to prevent us from doing that. Nature increases the masses of moving objects in order to stop us from destroying entropy, implying that mass, inertia, and gravity are somehow fundamentally connected with information through entropy. ^{29} Entropy and information keep poking their noses into reality in unexpected ways.

If scientists decide to use a different model of the universe, they will have to account for all information as it's presented to us in this universe. If someone creates a timeless model of the universe, information concerning time and motion here must be encoded into that universe, even if it takes extra dimensions to do it. If someone creates a theory of the universe without gravity or relativity, information concerning the effects of gravity and relativity in our universe still must be encoded into that model somehow. It's no good to take shortcuts and omit some data from your theory – if you even try to do that, the universe will conspire against you and render your theory invalid. Entropy is a stern master.

Getting back to Susskind's video lecture that started this whole thing off, I can't comment on the validity of boosting a box full of particles and treating them classically within the context of string theory. It's probably perfectly okay to do that in string theory. ^{30} I only mentioned his lecture because it triggered some insights about how information seems to be inextricably linked to the reality riddle.

29 Refer to Erik Verlinde's entropic theory of gravity and inertia, discussed in Appendix F.

30 I must admit that string theory is still way over my head, even after I watched the lecture.


Appendix H – There's Trouble on the Horizon

No essay on scientific theory would be complete without a discussion about black holes. They are the quintessential features of general relativity and physicists tend to use them to explain everything from soup to nuts. As they say, “When you're in an argument, throw in a black hole to prove your point.” Unfortunately, things are going to get a little “mathy” right about now. However, if you ignore the math and just read the words, you'll still get the gist of what I'm trying to say.

Karl Schwarzschild was the first person to calculate an exact solution to Einstein's field equations. He accomplished that feat in 1915. ^{31} He did it for a single spherical non-rotating mass, which later became popularly known as the black hole.

Schwarzschild used the spherical coordinate system, where a point in space is defined by a distance, r, from the origin, and the latitude and longitude are the two angles, θ and φ. Here's his relativistic solution for the non-spinning black hole in all its glory:

ds^{2} = c^{2} (1 – R_{s}/r) dt^{2} – dr^{2} / (1 – R_{s}/r) – r^{2} (dθ^{2} + sin^{2}θ dφ^{2})

On the left-hand side of the equation is the quantity s, known as the “relativistic metric,” which is how “distances” are measured in space-time. The quantity “ds” means “the change in s,” and ds^{2} is the square of the change in s – not the change in s squared.

The distance r in the equation is the radial distance of an object from the “center” of the black hole as seen from afar. You can measure that in meters, miles, or any other units you like.

The constant R_{s} is called the Schwarzschild radius, named after Karl, of course. It's equal to 2MG/c^{2}, where G is the good ol' universal gravitational constant we all know and love from high school physics, M is the mass of the black hole, and of course c is the speed of light. A lot of weird things happen at the Schwarzschild radius, as we will see shortly.

The last term of the equation is ugly, so let's get rid of it. If we let objects only move along radial lines, then the angles are constant and their changes, dθ and dφ, are zero. We are left with this:

ds^{2} = c^{2} (1 – R_{s}/r) dt^{2} – dr^{2} / (1 – R_{s}/r)

You can relate the “relativistic metric” to “proper time,” which is simply the time shown on the dial of your wristwatch as you travel through space-time sitting in your easy chair. ^{32} Proper time is designated as τ, where dτ^{2} = ds^{2} / c^{2}.

Light rays have the peculiar property that dτ^{2} for them is always zero everywhere in the universe. ^{33} So if you want to study light rays, you must set the right-hand side of the equation equal to zero:

c^{2} (1 – R_{s}/r) dt^{2} – dr^{2} / (1 – R_{s}/r) = 0        (for light rays only)

Solving the above formula for dr/dt gives the radial velocity of light at any position r:

velocity of light at r = dr/dt = ± c (1 – R_{s}/r)

Notice that when light is really far away from the black hole, r is really large, and R_{s}/r goes to zero, so the velocity of light is just equal to ± c, which is nice. But something peculiar happens when r equals R_{s}. There the velocity of light equals ± zero! That's right: light goes nowhere at the Schwarzschild radius. It just stops cold, which I guess is why a black hole is black. (By the way, the same thing

31 He did it while serving in the German army on the Russian front in WWI. He died in 1916 from a rare skin disease.

32 You can use either an expensive Rolex or a cheap Timex to measure the relativistic metric; they'll both work the same.

33 If photons wore wristwatches, the hands on their watches would never move.


happens to a light ray that's traveling horizontally along the surface of the event horizon.)
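To put numbers on the formula above, here's a short Python sketch using the standard constant values; a black hole of one solar mass is assumed purely for scale:

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M = 1.989e30       # one solar mass, kg (chosen just for scale)

Rs = 2 * G * M / c**2          # Schwarzschild radius: about 2.95 km

def light_speed(r):
    """Coordinate radial speed of light at radius r, per dr/dt = c(1 - Rs/r)."""
    return c * (1.0 - Rs / r)

for r in (1e12, 10 * Rs, 2 * Rs, 1.01 * Rs, Rs):
    print(f"r = {r:12.4e} m   dr/dt = {light_speed(r):12.4e} m/s")
# Far away the coordinate speed is essentially c; at r = Rs it is exactly zero.
```

Running this shows the speed sliding smoothly from c down to nothing as r approaches R_{s}, which is the "stops cold" behavior described above (as seen by a far-away observer).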

But special relativity says that light has a constant speed, c, everywhere. So what happens at R_{s}? Well, infinity happens, because dr^{2} / (1 – R_{s}/r) equals infinity when r = R_{s}. This is called a “singularity” and engineers don't like singularities one bit, especially when they just pop into space for no good reason. Now there's another singularity for the dt^{2} term at r = 0. That one makes sense if the black hole's entire mass is squeezed into a tiny point at the center of the hole. But nothing is physically going on at r = R_{s} to make a singularity happen there. It's like the Cheshire cat in Lewis Carroll's Alice's Adventures in Wonderland. It's all smile and no cat.

Physicists pooh-pooh the whole thing by calling it “just a coordinate singularity.” Here's what one physicist had to say about it:

“Although the singularity at [the Schwarzschild radius] was long suspected to be a coordinate singularity, this was not proved until the late 1950s, when a coordinate transformation was found that eliminated the singularity. Additional coordinate transformations have been discovered since. These will not be considered here, as they are mathematically complex.” ^{34}

Now outside and inside the black hole, the equation is just fine. It's only when r equals R_{s} that things get messed up. An engineer like me might “fix” this glitch by adding a small imaginary number to the denominator to make sure it's never zero, like this: (1 – R_{s}/r + jε). But that would be cheating.
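Just to show what that cheat would look like, here's a toy Python snippet (my own illustration, not a physically meaningful fix) where the small imaginary term keeps the dr^{2} coefficient finite right at r = R_{s}:

```python
Rs = 1.0       # work in units where the Schwarzschild radius is 1
eps = 1e-6     # the small "j-epsilon" fudge term

def dr2_coeff(r):
    """1 / (1 - Rs/r + j*eps): the denominator can never reach zero."""
    return 1.0 / complex(1.0 - Rs / r, eps)

print(abs(dr2_coeff(2.0)))   # far from Rs: essentially the honest value, 2
print(abs(dr2_coeff(1.0)))   # at r = Rs: huge (1/eps = 1e6) but finite
```

The singularity is gone, but only because we smuggled it off into the complex plane, which is exactly why it's cheating.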

(Actually, the equation isn't fine at all inside the Schwarzschild radius. Here, signs are flipped and space and time reverse roles. This means that the center of the black hole isn't a place, it's a time. Specifically, it's the “future” where everything comes to an end. Inside the Schwarzschild radius, it's a crazy bizarro world; or as Dorothy would say, “Toto, I've a feeling we're not in Kansas anymore.”)

The surface where all hell breaks loose in the equation of a black hole is euphemistically called the “event horizon.” Now if light slows down and stops at the event horizon, what happens when a typical object, like a chair, a refrigerator, or a person is dropped into a black hole? Well, as seen from outside the event horizon, that object will approach the event horizon, but it will never get there. That's right, it just stops. But physicists say that the object itself will experience going right through the event horizon like it was nothing – they say it's just a matter of watching things from different perspectives.

Let's do an experiment. Suppose Alice lowers Bob into a black hole with a rope. ^{35} Say Bob starts out 10 feet over the event horizon and he yells, “Let out 5 feet of rope.” Alice lets out 5 feet of rope, but Bob only moves an inch lower. So Bob yells, “I said, let out 5 feet of rope!” and Alice does it again, but Bob only moves another ¾ of an inch. This keeps happening, over and over. Alice keeps letting out 5 feet of rope, and Bob moves less and less each time. Finally, Alice lets out miles and miles of rope, but Bob never reaches the event horizon. What happens is that the closer Bob gets to the event horizon, the more stretched out distances become in his frame of reference. So if Bob finally lets go of the rope and falls in, I think he would see himself as falling, but the event horizon would keep receding away from him. It's the only way I can see to properly map Bob's experiences as data in Alice's space.

This raises the question of how a black hole is formed in the first place. If stuff on the outside stops before it gets to the event horizon, then how did all that stuff get in the inside? And if Bob does get inside, how do we resolve the serious data mapping discrepancies between what Bob experiences and what Alice sees? Remember what happened when we tried to erase relativity (and information) by boosting a room full of objects to the speed of light? Nature intervened and prevented that from happening. Alice's and Bob's situation here is very similar. As Bob falls toward the event horizon, Alice sees Bob's watch slowing down, and sees him flatten out, becoming more 2-dimensional near the

34 Too mathematically complex to consider here? Try me. I think you're trying to pull one over on us, Mr. Physicist. (I'm only kidding. I'm sure what the physicist said is true.)

35 You might ask how Alice keeps from falling in. Well, if you must know, she uses rockets to stay aloft.


event horizon, but never quite getting there. But does nature intervene and keep Bob from actually crossing over, thereby preventing a huge data mapping discrepancy? No. At least not according to the equations. I think this points out a difference between engineers and physicists. When engineers see an equation that expresses nonsensical results, we reexamine all the assumptions that were used in deriving that equation. Physicists often fail to do this. Regarding the Schwarzschild equation, I believe the underlying assumption was that a given mass can physically fit inside its Schwarzschild radius.

Things get even worse when the black hole is spinning. Remember the other singularity at the center of the black hole, representing the “future”? From what I've read on the subject of spinning black holes, it's possible to unveil the “future” as a “naked singularity,” visible to everyone outside the event horizon. That sounds downright indecent, and physicists recoil in horror at the very thought of it – as well they should – because if such a thing really happened, we could literally look into the future. The only problem with that is the future doesn't exist because information about the future hasn't been created yet. So according to the equations of spinning black holes, we could be looking at something that doesn't even exist. Does anyone besides me see a problem with that?

Kurt Gödel employed a spinning universe to create his time loops that permitted backward time travel, which of course violates causation and probably the second law of thermodynamics as well. Others used spinning cylinders in space to create similar time machines on paper. Now we learn that a spinning black hole might reveal a naked singularity that shows us the future. Let's face it: general relativity just doesn't handle the problem of spin well at all.

There is one theory, however, that does handle spin. In fact, spin forms the fundamental unit of measurement in that theory, known as Planck's constant. Of course I'm referring to quantum mechanics. What I'm saying is that maybe Einstein's field equations work well in some circumstances, but they break down in others, as in Schwarzschild's solutions. If there really are black holes, maybe these will have to be treated as quantum-mechanical objects instead of using Einstein's field equations, especially the spinning ones.

Now it may seem odd to imagine looking at huge objects, like black holes, as quantum-mechanical objects, which are normally thought of as very small. But there are laboratory situations where fairly large things behave quantum-mechanically. I'm referring to things like Josephson junctions and superfluid helium-4. Liquid helium-4 is a unique substance. It doesn't solidify, even when temperatures approach absolute zero. Furthermore, since the helium-4 atom has two electrons, two protons, and two neutrons, the intrinsic spins of these particles add up to multiples of whole units of spin instead of multiples of half units. This makes the helium-4 atoms act as bosons instead of fermions, meaning all of them can occupy the same quantum-mechanical state simultaneously, and they do this when the helium is cooled to a low enough temperature. A beaker full of superfluid helium-4 behaves almost like a single elementary particle. One way this manifests itself is by spinning the beaker. The superfluid inside the beaker won't begin spinning until the beaker reaches a certain critical speed of rotation. Then the fluid suddenly starts spinning as if it jumped into an excited quantum state.

Black holes aren't the only cosmological objects that may need a quantum-mechanical treatment in order to make sense out of them. Our entire universe might also qualify as a quantum-mechanical object. As I've said over and over again, general relativity is a wonderful theory when it's used in the appropriate context. I'm just not convinced that that context includes the entire universe. Scientists have invented a quantum theory that applies to small things and a theory of general relativity that works for larger things (at least some of the time). What science is lacking is a cosmological theory that encompasses everything, treating electrons, quarks, and the entire universe the same way.

It could be that such a cosmological theory is right around the corner. String theorists say they are on the verge of something huge. We'll see. It could be that such a theory is simply beyond human comprehension, so we'll have to settle for what we've got. Or it could be that if we discover The Thing That Explains All Things, it will be something that we just can't write down as mathematical equations.


Appendix I – The Cosmological Conundrum

Note: I had trouble deciding whether to even include a topic about cosmology. I hated to leave it out altogether because it's so darned interesting, plus it seems to have some relevance to the main topic. But it didn't seem to fit anywhere in the main part of the essay, plus it's pretty vague and speculative. So I finally decided just to stick it in at the end as an appendix.

Hard-core physics books are in the 530 series of the Dewey Decimal Classification. I've just about cleaned out that series from the public libraries in my area. Philosophy and metaphysics are way down in the 100 series, and I've dabbled around in that section quite a bit as well. Not very far from my beloved 530 series is a small section between 523.1 and 523.2, where the books on cosmology are found. Books by unfamiliar authors lurk around on those shelves alongside works by renowned physicists like Stephen Hawking and Alan Guth. Cosmology is the place where physics and metaphysics meet. I like reading those books because even serious scientists like Hawking and Guth can let down their hair and get all wild and crazy. These books kind of remind me of science fiction.

I read a lot of science fiction books when I was a kid – before I matured and became a real scientist, er, engineer. ^{36} Unfortunately, science fiction books don't even warrant a decent place in the Dewey Decimal Classification. They're consigned to the dreaded “FIC” section of the library. At least the “BIO” section is organized by the last name of the person the book is about. But everything in the “FIC” section is amorphous and lacks definition – nothing is arranged by subject, but only by the last names of the authors. A really cool sci-fi book might even be buried between a murder mystery and a romance novel. Yuck! That made it nearly impossible for me to find good science fiction books by simply browsing through library shelves.

The reason why cosmology earned its place in the Dewey Decimal Classification, whereas science fiction didn't, is because cosmology is supposedly based on good scientific theories whereas science fiction is based on … well … fiction. But here's the problem with this: scientific theories don't do an adequate job of supporting what cosmologists are trying to explain. Sure there's quantum physics and general relativity, one describing what goes on in the submicroscopic world, and the other explaining gravity a little better than Newton did. But that's it. I call this the cosmological conundrum. Even without much in the way of scientific theory, cosmologists can still spin a pretty good yarn. I'm going to paraphrase that story using the style of the science fiction genre.

About 13 or so billion years ago, a tiny speck of nothing exploded into a blazing super-hot speck of something. It started out as quantum soup where nothing made any sense – time and space were interchangeable and everything was pure energy. Left to its own devices, the universe might have stayed that way forever. It needed a jump start, because general relativity wasn't enough. You see, there was no gravity, just a single Force. Gravity, electromagnetism, and the two nuclear forces were like identical quadruplets that you just couldn't tell apart. Then gravity decided to split – it's been nature's problem child ever since. So gravity said, “¡Adiós amigos!” and there was this huge phase transition.

36 I wrote my own science fiction story in junior high school for an English assignment: An evil ruler reigns over the Earth, which is on the brink of calamity and destruction. A fantastically wealthy scientist owns an observatory that has a super telescope. He discovers a rich, verdant, and unpopulated planet orbiting a star about 500 light years away. He names the planet Arcadia and organizes a group of 100 or so enlightened individuals to construct a spaceship named The Adventure and travel to Arcadia, thus escaping the inevitable annihilation of Earth. (Up to this point my story followed a very familiar plot line, but bear with it.) Okay, so the spaceship is completely life-sustaining, so the crew can survive in space indefinitely. It takes 250 spaceship years for The Adventure to reach Arcadia traveling at 89% light speed, so about 6 generations of people are born, live, and die on the ship. The descendants of the original crew finally arrive at Arcadia, but they find the planet is not only completely overrun by humans, but it's even more evil and in worse shape than the Earth was when their ancestors escaped. It turns out that shortly after The Adventure left, the evil ruler found out about Arcadia and he sent 10,000 spaceships there to exploit its riches. Traveling at over 97% light speed, they got to Arcadia 50 years ahead of The Adventure. They set up evil mining colonies all over the planet, all controlled by the evil ruler, and they were killing each other and destroying Arcadia when The Adventure arrived. The End.


This gave the universe the jump start it needed. Inflation kicked in right away, and boy did things change then. The entire universe went from a tiny seed smaller than a proton into a humongous space that stretched farther than we can see with our most powerful telescopes. The whole inflation event took less time than it takes light to travel from one side of an atom to the other, but it gave the universe plenty of elbow room and made space flatter than a pool table. Now you'd think that stretching something the size of a proton into trillions of cubic light years would make things pretty sparse, but energy kept filling up space as it inflated. Then just as suddenly as it started, inflation stopped, which is a good thing because I don't think humans could live in a universe that keeps inflating forever. At this point, general relativity thanked inflation for giving the universe a jump start and took over from there.

After inflation was finished, things were pretty darned hot. But thanks to general relativity, the universe just kept expanding and cooling down. It cooled enough for electromagnetism to split off from the two nuclear forces, and finally those split into a strong and a weak variety. The identical quadruplets had grown up, and they didn't even look like each other any more. Photons, electrons, and quarks condensed out of the quantum soup. Quarks can't stand living alone, so they quickly partnered up with each other to make protons, thanks to the strong force. The protons kept banging into each other because there were so many of them, and thanks to the weak force one of them would occasionally change into a neutron, sending a positron off to find an electron to annihilate. ^{3}^{7} The proton-neutron combinations got along very well, so they decided to stick together, calling themselves deuterons. Deuterons banged into each other and also with protons. Some of the deuterons stuck together, forming foursomes known as alphas. Things were going along just fine, with deuterons and alphas forming and everyone getting together like a great big party. But then the whole thing stopped. Things cooled down too much and it seemed like the strong and weak forces were finished.

The universe was now a pretty dull place. Sure there were electrons, protons, deuterons, alphas, and plenty of photons, but nothing really interesting ever happened. ^{3}^{8} It was kind of like sitting inside a neon bulb with the juice turned on. You couldn't see very far because the photons kept banging into the electrons – not that there was much to look at anyway; only red plasma that kept getting thinner and thinner – there was just this reddish glow everywhere you looked. Then a strange thing happened.

The electrons were feeling bored and lonely with all this space and nothing to do. They were also getting pretty tired of being pushed around by the photons. They couldn't get together because a) they were all negatively charged, which they found pretty repulsive, and b) they belonged to a secret society known as the Fermions, which have strict secrecy rules. One of them is called the Exclusion Principle, which prohibits members from getting into each others' spaces and learning each others' secrets. But then they came up with a great idea: why not team up with the protons, deuterons, and alphas? Not only were they positively charged, which the electrons found positively irresistible, but they also didn't have any problems sharing some empty space with electrons. It was a win-win situation.

So the electrons hired the law firm of Bohr Heisenberg & Pauli to draw up some contracts. The electrons were allowed to move in with the positively-charged particles as long as they stayed in their assigned shells and respected each others' privacy. After an electron moved in with a proton or a deuteron, they called themselves a hydrogen atom, and they found out they could even buddy up with another hydrogen atom to form a molecule. The alphas let two electrons move in and they called themselves helium, but the helium atoms couldn't form molecules because having two electrons as roommates was all the company the alphas could handle. So now there were hydrogen molecules and helium atoms and no electrons left over. All the charges canceled out and everyone was happy.

With no free electrons on the playground, the photons didn't have anyone to bully anymore. So they just zoomed off into oblivion, and the universe suddenly became transparent! Those original photons

37 According to Richard Feynman, sending off a positron is the same as absorbing an electron coming from the future. I kid you not.

38 There were also some kinky ménage à trois combinations with one proton and two neutrons and one neutron and two protons, but I feel uncomfortable talking about that. I won't mention them again.


are still around today, but the expansion of the universe made them lose most of their original energy. After starting out red hot at thousands of degrees, the universe has cooled off to a chilly 2.7 K. You can actually take the temperature of the universe by pointing a sensitive microwave receiver at it. ^{39} At first, scientists put the receivers in high-altitude balloons. Then they launched a satellite named COBE, which provided data that matched the cosmologists' predictions to an astounding degree of precision. So it looks like the universe really does have the right temperature, because if you work the equations backwards you find out that the temperature of the early universe was exactly as hot as it needed to be at the exact moment it became transparent. Not only that, but the pattern of the universe's temperature in the COBE data has just the right amount of graininess and stringiness.

The hydrogen molecules and helium atoms were thinning out pretty fast. Things were getting boring again. Even the photons were dying of boredom, going from red to infrared, and finally turning into dull microwaves. Then, without warning, some of the atoms started drifting together, pulled in by that bad boy gravity. The other kids (electromagnetism and the strong and weak forces) had their fun, but now it was gravity's turn. Gravity didn't have much to do until now because it needs lumps in order to attract stuff. Without lumps, gravity just sits there. But for some strange reason, lumps just appeared out of nowhere. Cosmologists aren't certain, but they suspect the lumps were caused by unseen matter from dark nether regions, so they call it dark matter. Now that was sure a surprise because up until now the hydrogen, helium and photons thought they were alone, but now it turns out they had lots of company. Their dark neighbors were friendly with gravity, but they ignored his three sibling forces. They formed stringy lumps so gravity could begin attracting stuff. Today those stringy lumps can be seen as those grainy patches in the temperature pictures taken by the COBE satellite.

Eventually, enough atoms got pulled together by the stringy lumps that they started heating up again! Gravity was behaving like a teenager on testosterone overload, collapsing giant balls of gas into roiling infernos. Things were getting out of hand until gravity's siblings, the strong and weak forces, stepped in. Everyone thought the strong and weak forces were history, but they were waiting in the wings all along for the opportunity to get back into action. The hydrogen atoms minus their electrons started making alphas again, which temporarily stopped the gas balls from collapsing, but that wasn't the end of it. Gravity was in charge now; no more Mr. Nice Guy. It could even bend space and time! The strong and weak forces tried to stop their evil sibling, making heavier and heavier elements all the way up to iron. At that point, the strong and weak forces could do no more and they threw in the towel. Some of the giant iron gas balls blew up, scattering debris all over the place while gravity laughed hysterically. Dark matter frowned, wondering if it had made a terrible mistake. Maybe its younger sibling, dark energy, could be summoned from the nether regions to stop this madness.

The rest of the story gets pretty repetitive: more stars, galaxies, planets, life, intelligence, yada, yada, yada. But the bottom line is the universe became a very exciting and chaotic place!

The End.

39 Here's where the story about Arno Penzias and Robert Wilson comes in. They were two radio engineers from Bell Labs who were fooling around with a large microwave horn antenna, and they kept picking up static. Well, duh. When I was a kid, I built short-wave radio receivers. Not from a kit, mind you. From schematic drawings and a bunch of parts I bought at the local Lafayette Radio Electronics store after saving up my allowance, which was 25¢ per week at the time. I built a single-tube regenerative receiver using a 1LE3 triode powered by a 45-volt battery, and I even wound my own tuning coils. I literally spent an entire summer in my bedroom listening to ham radio operators from as far away as Australia (I lived in New Jersey). My dad started calling me “Marconi” after the famous guy who invented radio. I don't know if Dad gave me that nickname as a compliment or he just thought he was being funny. Either way, I heard an awful lot of static in my headphones that summer, but I just figured it was par for the course. Not Penzias and Wilson. They were really bothered by static and they tried everything in their power to eliminate it. They blamed it on pigeon droppings – it never even occurred to me that bird droppings on my antenna could be making the static – so Penzias and Wilson scrubbed down their antenna with soap and water. The static persisted, but their fastidiousness certainly paid off, because then they realized that the static was coming from outer space, which won them the Nobel Prize. I'm pretty sure my 1LE3 triode tube picked up some static coming from outer space too, but nobody thought I deserved the Nobel Prize for that. Not even Dad.


Appendix J – Can Dark Matter Really Form Halos?

According to the prevailing cosmological theories, there is five times as much “dark matter” (DM) as ordinary matter in the universe. All galaxies, including our own Milky Way, supposedly formed within primordial DM “halos.” Scientists believe that spherical DM halos are also required to account for anomalous orbital velocities in the outer regions of spiral galaxies, which tail off much more slowly than predicted by the standard laws of gravity. Moreover, the distribution of DM particles within these proposed spherical halos must be fine-tuned in order to produce the gravitational effects that would explain those anomalous observations. DM only interacts with ordinary matter through gravitation; whatever DM particles are, they do not interact through the electromagnetic, weak, or strong nuclear forces – the only forces besides gravity that are known to exist. In my view, the lack of particle-to-particle interactions raises a very serious question about whether DM halos could be formed.

I'm just a simple engineer, so I looked at this question by comparing the formation of a hypothetical DM halo against a toy model of a cloud of ordinary gas molecules using Newtonian physics. The figure below shows an inner sphere of gas having radius r and mass M, surrounded by a thin shell of gas having mass Δm_i. There are many other layers of gas surrounding the first shell that are not shown.

As far as the shell is concerned, 100% of the gravitational force it experiences comes from the mass of the inner sphere, as if M were concentrated at a point in the center, marked x. Let ρ_i be the density of the material in the shell. The incremental gravitational force, ΔF_i, pulling the shell inward is then given by Newton's law of gravitation:

ΔF_i = G M Δm_i / r² = G M (4π r² ρ_i Δr) / r² = 4π G M ρ_i Δr

where the mass of the inner sphere is the sum of the masses of all the shells inside it:

M = 4π Σ_{j=1,…,i−1} ρ_j r_j² Δr

The inward gravitational force on the shell applies an incremental pressure, ΔP_i = – ΔF_i ÷ (area of the sphere), so the total pressure decreases as r increases:

ΔP_i = – ΔF_i / (4π r²) = – G M ρ_i Δr / r²

The total inward pressure applied on the sphere is the sum of the incremental pressures from all the shells surrounding the sphere: P = – Σ ΔP_i. Of course, the gas in the sphere resists being compressed, and it exerts an outward pressure that exactly balances the total inward pressure. The key principle is this: The inward pressure from the outer layers keeps the gas in the central sphere from expanding, and the pressure from the inner sphere keeps the gas in the outer layers from collapsing inward. Thus, a spherical cloud made of ordinary matter can build up and be contained by gravity alone; it will be stable because internal pressures are in perfect balance with the gravitational forces.^{40}
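As a sanity check, the shell-by-shell sums above are easy to evaluate numerically. Here is a minimal Python sketch; the uniform density and cloud radius are arbitrary illustrative values I picked, not measured ones. For a uniform sphere, the summed pressure should approach the textbook central value P_c = (2π/3) G ρ² R²:

```python
import numpy as np

G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
R = 1.0e16              # cloud radius in meters (illustrative value)
rho0 = 1.0e-17          # uniform gas density in kg/m^3 (illustrative value)
n = 10_000

r = np.linspace(R / n, R, n)   # shell radii
dr = r[1] - r[0]
rho = np.full(n, rho0)

# M_i = 4*pi * sum over inner shells of rho_j * r_j^2 * dr (mass enclosed)
M = 4 * np.pi * np.cumsum(rho * r**2 * dr)

# |dP_i| = G * M_i * rho_i * dr / r_i^2 (inward pressure from shell i)
dP = G * M * rho * dr / r**2

# Pressure at shell i = contributions from all shells outside it
P = dP[::-1].cumsum()[::-1]

P_center = (2 * np.pi / 3) * G * rho0**2 * R**2   # analytic central pressure
print(P[0], P_center)   # the numerical and analytic values agree closely
```

The pressure profile also decreases monotonically outward, exactly the balance the toy model describes.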

40 By including the ideal gas law, P_i = ρ_i R T, you could construct a differential equation that solves for the density of the cloud as a function of distance from the center, ρ(r), but that is way beyond what I want to discuss here.


Now let's compare ordinary gas to a hypothetical halo of DM. Pressure in an ordinary gas comes from collisions between gas molecules, which mainly involve repulsive electromagnetic forces between the electron shells of colliding atoms. As stated previously, DM does not interact through electromagnetic, weak or strong nuclear forces. Thus, there is no pressure inside a hypothetical DM halo; it therefore must consist entirely of DM particles that are in orbit. But exactly what are they orbiting?

You might think that DM particles orbit around the halo's center of mass. But consider a DM particle very near that center of mass. It feels no gravity at all because all of the halo's mass is in concentric shells that surround it on all sides.^{41} With no gravity at the center, the DM particle is free to migrate outward. Since this applies to every DM particle near the center, a central void would start growing as more DM particles leak away. Of course DM particles could wander back into the void, but they would experience no gravitational force pulling them inward, so they would eventually drift away again. This situation is equivalent to countless “planets” in a “solar system” without any “sun” in the center to keep them in orbit.^{42} With little or no gravitational field near the center and no internal pressure, there is nothing to hold the halo together. Even if a DM halo could be formed artificially, it would immediately start to disintegrate from the middle out. Gravitationally-bound DM systems are completely unstable, and therefore halos naturally formed entirely of DM are not possible.
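The shell-theorem claim – an object inside a hollow sphere feels no net gravity – can be checked with a quick Monte Carlo experiment. This is only an illustrative sketch; the shell radius, number of sample masses, and test points are arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(0)

# Scatter N equal point masses uniformly over a spherical shell of radius 1
# (total mass 1, G = 1).
N = 200_000
v = rng.normal(size=(N, 3))
shell = v / np.linalg.norm(v, axis=1, keepdims=True)

def net_accel(pos):
    """Net Newtonian acceleration at `pos` summed over all shell masses."""
    d = shell - pos
    dist = np.linalg.norm(d, axis=1, keepdims=True)
    return (d / dist**3).sum(axis=0) / N

inside = net_accel(np.array([0.3, 0.1, -0.2]))   # a point inside the shell
outside = net_accel(np.array([2.0, 0.0, 0.0]))   # a point outside the shell

print(np.linalg.norm(inside))    # ~0: no net force anywhere inside the shell
print(np.linalg.norm(outside))   # ~0.25 = GM/r^2, as if all mass were central
```

The interior force cancels to within Monte Carlo noise, while the exterior point feels the full GM/r² pull, which is exactly why a DM particle near the halo's center has nothing holding it in place.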

In contrast, ordinary gas in the center of a cloud will experience inward pressure from all the gas layers that surround it, thus preventing any voids from forming. The gas confined in the center forms the gravitational nucleus that pulls all outer layers inward, each subsequent layer increasing both the volume of the sphere and its gravitational mass, M, attracting additional layers of gas. The balance between internal pressure and the inward gravitational forces allows the formation of a stable spherical structure that is contained by gravity alone. This is impossible without internal pressure. Perhaps DM particles actually do experience some kind of internal pressure by interacting with each other through some new, undiscovered force of nature – not the electromagnetic, weak, or strong nuclear forces.^{43} But this would require rewriting much of fundamental physics as we know it.

There are other problems with DM besides the inherent problem of forming stable DM halos. With the discovery of the Higgs particle, the standard model (SM) of particle physics is now complete. Extra DM particles that clutter up the SM landscape would be most unwelcome. It is hoped that supersymmetry (SUSY) will come to the rescue by providing stable “super partners” of the existing standard-model particles that can serve as stand-ins for DM. Unfortunately, the initial runs of the Large Hadron Collider (LHC) showed no evidence of those hoped-for super partners. The next set of LHC runs will take place in 2015 at much higher energies. If those runs still produce no super partner candidates, SUSY will be in big trouble and physicists may be forced to rethink or abandon this model altogether.

Without a viable DM halo structure or any evidence of DM particles themselves, another way to resolve the observed gravitational anomalies is to modify general relativity. As I stated repeatedly in this essay, there are problems with GR because it permits solutions that violate its own first principles. GR is an approximation of a more complete theory of gravity, space and time, just as Newtonian gravity is an approximation of GR. GR may be a very good approximation when gravitational fields aren't too strong or distances too large, but it starts to fall apart when those conditions no longer apply.

Some theorists have developed theories known as modified Newtonian dynamics (MOND) that provide some semblance of agreement between theory and observations without having to drag problematic dark matter into the existing theory of gravity. From what I've read – and I admit I don't completely understand much of what was written – it seems that the current MOND proposals are all somewhat ad hoc and a bit too a posteriori (“curve-fitty”) to suit me. I think a radically different bottom-up approach is needed that allows gravity to emerge from a whole new set of first principles, and this new approach may also require science to modify its current understanding about the nature of space and time.

41 It's easy to show from Newtonian physics that an object inside a hollow sphere will feel no gravitational force from the matter in the surrounding sphere.

42 I'm excluding a central black hole because according to DM theory, DM halos formed before black holes existed.

43 Particle-to-particle interactions through gravity alone would be far too weak to generate pressure.


Appendix K – There's Trouble on the Horizon (Part II)

Black holes have become a very popular plaything for astrophysicists, cosmologists and lately, particle/string theorists like Leonard Susskind. It seems that almost anything in physics nowadays can somehow be connected to a black hole. For example, Lisa Randall gave a talk recently where she said she was really excited about the possibility of seeing extra-dimensional black holes in the TeV range emerging from the strong brane into the weak brane through a warped 5th dimension, after the proton beam energies in the Large Hadron Collider are sufficiently ramped up.^{44} But are black holes real or just a figment of the imagination? I've done a little more digging on this topic and came up with some relevant historical facts:

• 1915 – Albert Einstein published his theory of general relativity

• 1915 – Karl Schwarzschild came up with the first exact solution to Einstein's field equations, involving a single spherical non-rotating mass.^{45} His solution revealed the fact that a sufficiently large mass would collapse behind a singularity radius, called the “event horizon.”

• 1939 – Albert Einstein published a paper entitled “On a Stationary System With Spherical Symmetry Consisting of Many Gravitating Masses.”^{46} Basically, he said a sphere of gravitating masses could never achieve a radius less than (2 + √3) times the singularity radius; i.e., a black hole could never form. It should be noted that Einstein was no slouch when it came to interpreting the theory of general relativity he invented, unlike modern apologists who use hand-waving arguments like “pathological coordinate systems” to blow away the singularity problem and the obvious paradoxes it raises.^{47}

• 1939 – J. Robert Oppenheimer and H. Snyder published a paper entitled “On Continued Gravitational Contraction.” They concluded that, yes, it is possible to form a black hole, but it would take an infinite amount of time to complete the task due to time dilation. In the meantime, the collapsing object will completely evaporate into space.

So the issue of black holes should have been put to rest in 1939. But what happened next? Kip Thorne, Stephen Hawking, and a whole new generation of brilliant physicists resurrected black holes in the 1960s and early 1970s. I honestly don't know what motivated this. Ironically, some of their early papers on black holes actually referenced Einstein's 1939 paper. Did they even read it? In any case, the floodgates opened; thousands of papers were written and entire careers were built on what had been previously shown to be a fallacy. Hawking and Jacob Bekenstein blended black holes with thermodynamics and information theory, while others speculated about black holes as being portals to wormholes and extra dimensions. Of course, all of this was grist for numerous science fiction^{48} books, movies, and the television series “Stargate.” The dying field of cosmology was rejuvenated by black holes. Anything that was unexplainable by ordinary physics could be easily and effortlessly explained with the help of black holes. Although nobody has examined any black holes up close, astrophysicists identified a large number of very distant objects as being “black hole candidates” or BHCs.

Meanwhile, the inevitable contradictions and paradoxes concerning black holes kept piling up and were swept under the carpet until 2012 when a genuine “crisis” erupted after a paper was published by Ahmed Almheiri, Donald Marolf, Joseph Polchinski, and James Sully (AMPS). The authors considered

44 At least that's what I think she said, because her talk was rather incoherent. Maybe I'm just too “weak-braned” (pun intended) to wrap my head around those extra warped dimensions, but to me it sounds like people are speaking in tongues when discussing such things.

45 The fact that Schwarzschild accomplished this feat so soon after Einstein published general relativity is amazing enough. What made this even more impressive is that he did this during WWI on the Eastern Front while he was under enemy fire and suffering from a fatal skin disease.

46 Einstein was always very polite and reserved. Even today, he probably would be too circumspect to entitle his paper “Black Holes Are Completely Bogus,” although that's pretty much what he said in the paper.

47 I realize that appealing to authority is a classic logical fallacy, but I can't help feeling that Einstein knew a lot more about the nuances of general relativity than any physicists living today.

48 Physicists often forget that the operative word here is fiction.


a typical Alice & Bob thought experiment with Alice falling into a black hole, and reached a frightening conclusion: When Alice becomes maximally entangled with the Hawking radiation emitted at the event horizon, there will be dire consequences for the vacuum if the black hole has reached half of its lifespan.^{49} I guess when particle physicists aren't able to come to terms with the paradox of Schrödinger's cat, they tend to see maximal quantum entanglement everywhere. This kind of reasoning is referred to as Maslow's Hammer.^{50}

Since the 2012 bombshell, there have been numerous papers written, conferences assembled, and lectures given about AMPS, without much resolution to the paradox. One string physicist^{51} proclaimed that AMPS revealed something that is very, very deep and that when the paradox is finally solved, it will usher in a golden age of physics – along with a theory of everything based on string theory, of course. The only thing this latest paradox reveals to me is that black holes can't truly exist.

Recently, Abhas Mitra and Laura Mersini-Houghton published independent papers that disproved the existence of black holes. Mersini-Houghton's argument is that Hawking radiation makes a collapsing star explode before it can form a black hole, which I find kind of weak because it's too mechanistic. I prefer Mitra's argument, which addresses Schwarzschild's mathematical solution directly, showing that with proper boundary conditions, this solution requires that the mass of the black hole be zero. The end result is similar to Oppenheimer's 1939 paper: a collapsing object will asymptotically approach the size of the Schwarzschild radius while it radiates away mass-energy. Over a very long time, both the collapsing object and the event horizon beneath it will shrink until the object's mass equals zero; the final result is the zero-mass black hole that is allowed by Mitra's solution. He calls these objects “eternally-collapsing objects” (ECOs), and large magnetic fields are among their hallmarks. It turns out that quite a few BHCs in the astrophysicists' collection are associated with stupendous magnetic fields, despite the fact that general relativity strictly forbids a black hole from having any magnetic field at all. Being an expert in general relativity while at the same time believing that BHCs truly are black holes is an interesting case of cognitive dissonance.

Engineers frequently use mathematical “laws” that take the form of quadratic equations that generally have two solutions. One of them usually reflects physical reality while the other is utter nonsense.^{52} What we do in those cases is to keep the physical solution and simply throw away the other one. Unlike engineers, physicists tend to take quite literally anything an equation can dish out. I believe this blind faith in mathematics started back in 1928 when Paul Dirac predicted the existence of an anti-particle from mathematical solutions that described both positively- and negatively-charged electrons. As an engineer, I'd probably have thrown away the solution describing the positively-charged electron, but Dirac kept it. Luckily for him, the positron was discovered experimentally a couple of years later.^{53}
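A concrete instance of the engineer's habit: the time for a thrown ball to hit the ground comes from a quadratic with two roots, and the negative root gets discarded as unphysical. A minimal Python sketch (the height, speed, and gravity values are just example numbers):

```python
import math

# A ball thrown upward at v0 from height h0 hits the ground when
# h0 + v0*t - 0.5*g*t^2 = 0, a quadratic in t with two roots.
h0, v0, g = 20.0, 5.0, 9.81

disc = math.sqrt(v0**2 + 2 * g * h0)
t_physical = (v0 + disc) / g      # positive root: the answer we keep
t_nonsense = (v0 - disc) / g      # negative root: "hit the ground in the past"

print(t_physical)   # ~2.59 s
print(t_nonsense)   # negative, thrown away without a second thought
```

Both roots satisfy the equation exactly; only one of them describes anything that actually happens.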

I think it's important to remember that mathematical equations aren't really physical laws; they simply describe behaviors that emerge in systems as they become organized. I think the true physical laws are hidden fundamental principles that are much simpler than the behaviors that emerge from them. Being exact and repeatable, a mathematical model may resemble a law, but it's a mistake to conflate the two. The case in point is taking the theory of general relativity – a model that has been confirmed only on the scale of the solar system and under conditions of relatively weak gravitation – and applying it to cosmological scales and conditions of extremely strong gravitation. This is a common error that occurs when using reductionist methods and logic.

49 I could never grasp how a full-sized human being can attain maximal quantum entanglement with elementary particles. Nevertheless, it appears the vacuum still remains intact and everything is cool so long as Alice falls through the looking glass before the black hole has had its critical midlife crisis. At least I think that's how it's supposed to work.

50 This is attributed to the observation made by Abraham Maslow in 1966, which possibly inspired the song “Maxwell's Silver Hammer” by The Beatles. A different Abraham, whose surname was Kaplan, made a similar observation in 1964: “Give a small boy a hammer, and he will find that everything he encounters needs pounding.”

51 This person also believes in wormholes. I won't reveal his name in order to spare him the embarrassment.

52 Examples would include negative surface areas and imaginary lengths.

53 Confirmation of Dirac's prediction probably had a lot to do with his winning the 1933 Nobel Prize. It's hard to say what might have happened if the positron prediction had turned out to be false.


Appendix L – The Einstein-Rosen Wormhole Fantasy

There is a lot of strange stuff being taught in the physics departments of major universities these days, and some of it is showing up in Wikipedia. Recently, I watched a video lecture given by a famous physics professor, who claimed that the Einstein-Podolsky-Rosen (EPR) paper was somehow equivalent to the Einstein-Rosen (ER) paper written in the same year. Or to use his terminology, ER=EPR. As you may recall, the EPR paper introduced a pair of entangled systems in a thought experiment that challenged the Copenhagen interpretation of quantum physics, which says that quantum properties don't exist until they are measured. According to popular science talks shown on YouTube, Einstein and Rosen “discovered” wormholes, and the clever formula ER=EPR means that wormholes (a.k.a. Einstein-Rosen bridges) connect entangled particles. Well, I went to the trouble of downloading a copy of the ER paper^{54} and read it, and I found out that Einstein and Rosen discovered no such thing.

The paper asked the question whether particle physics could be unified with general relativity. In 1935, there were four known elementary particles – the electron, proton, neutron and positron – and two known forces, electromagnetism and gravity. ER proposed that an exact solution to the field equations of general relativity – the Schwarzschild solution – might be combined with Maxwell's equations of electromagnetism in a way that would be equivalent to the known particles. In other words, this was a very early attempt to come up with a Theory of Everything, based on what little was known in 1935.

Einstein and Rosen used the following form of the Schwarzschild solution.

ds² = – dr² / (1 – 2m/r) – r² (dθ² + sin²θ dφ²) + (1 – 2m/r) dt²

Here, r is the distance from the center of the “particle” coordinate system that is being modeled. There are two singularities, at r = 0 and at r = 2m. Using the change of variables u² = r – 2m gets rid of both singularities and converts the equation to the following form.

ds² = – 4 (u² + 2m) du² – (u² + 2m)² (dθ² + sin²θ dφ²) + u² dt² / (u² + 2m)

“As u varies from – ∞ to + ∞, r varies from + ∞ to 2m and then again from 2m to + ∞. If one tries to interpret the regular solution [in the first equation] in the space of r, θ, φ, and t, one arrives at the following conclusion. The four-dimensional space is described mathematically by two congruent parts or ‘sheets,’ corresponding to u > 0 and u < 0, which are joined by a hyperplane r = 2m or u = 0 in which g vanishes. We call such a connection between the two sheets a ‘bridge’ [italics added].”

The above interpretation from Einstein and Rosen is diagrammed below.

The operative word they used to describe these sheets is congruent, meaning the same. The bridge does not connect two different universes, different parts of our universe, or even two entangled particles, and the vertical direction shown is not a 5th dimension. The two sheets are the same four-dimensional space, mathematically split into two sheets with no singularity at r = 2m. With u² substituted as the new variable in the equation, it makes no difference whether u is negative or positive; hence, the two sheets are duplicates of each other, and anything that “happens” in one sheet would also be expected to occur in the other sheet. The paper went on to add electric charges to the equation, but nowhere is there any mention of extra dimensions, wormholes, or quantum entanglement. Equating space-time geometry to elementary particles never panned out. Although Einstein's ashes were scattered, physicists will exhume him metaphorically whenever they need to authenticate some new wormhole theory.
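The coordinate change itself is easy to verify by hand or by machine. The following sketch uses exact rational arithmetic (Python's fractions module) to confirm, at a few sample values, that substituting r = u² + 2m and dr = 2u du into the first line element reproduces the du² and dt² coefficients of the second (the angular part follows trivially from r = u² + 2m):

```python
from fractions import Fraction as F

def g_rr(r, m):
    """Coefficient of dr^2 in the Schwarzschild line element."""
    return -1 / (1 - 2 * m / r)

def g_tt(r, m):
    """Coefficient of dt^2 in the Schwarzschild line element."""
    return 1 - 2 * m / r

# With u^2 = r - 2m we have r = u^2 + 2m and dr^2 = (2u)^2 du^2.
for u, m in [(F(1, 2), F(1)), (F(3), F(2)), (F(7, 4), F(1, 3))]:
    r = u * u + 2 * m
    du2_coeff = g_rr(r, m) * (2 * u) ** 2
    assert du2_coeff == -4 * (u * u + 2 * m)        # matches ER's -4(u^2 + 2m)
    assert g_tt(r, m) == u * u / (u * u + 2 * m)    # matches u^2/(u^2 + 2m)

print("Einstein-Rosen substitution verified at sample points")
```

Note that the du² coefficient at u = 0 is simply –8m, perfectly finite: the “bridge” is just the smooth joining surface, not a tunnel to anywhere.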

54 A. Einstein and N. Rosen, “The Particle Problem in the General Theory of Relativity,” Physical Review, Vol. 48, p. 73, July 1, 1935.


Appendix M – Beyond Belief

Science is the method of discovering the truth about the physical universe as part of general knowledge. It's instructive to note the difference between knowledge and beliefs. Beliefs form our attitudes, behaviors, and the things we say in public. Belief is not the same as knowledge. Although beliefs should be guided by knowledge, we believe some things we really aren't certain are true.

An ideal belief system is modeled using the following Venn diagram.

The gold area represents truth and knowledge; the set of things that are known as facts. Facts may be self-evident, such as the logic statements A = A and A ≠ ¬A; they may be learned from direct experience; or they may be proven by applying deductive or inductive reasoning along with the scientific method. The brown area represents the set of things that are known to be false, including hypotheses that are contradicted by evidence plus statements that are prima facie false, e.g. A = ¬A. It is also reasonable to categorize ideas as false if they lack both logical necessity and evidence of being true. Purple unicorns would fall into this category. The blue circle represents a set of beliefs, and ideally, it should be highly aligned with the gold disk. The circle of beliefs may include a small amount of speculation in the white area that surrounds the truth; however, it should definitely exclude any falsehoods in the brown area.

Belief systems can sometimes be poorly aligned with the truth, as depicted below.

The circle of beliefs has shifted to the right, and now includes more unproven speculation in the white area, as well as some falsehoods in the brown area. Some truths in the gold area are also excluded from the circle of beliefs. This belief system represents what is commonly known as delusion, defined as holding a strong belief despite superior evidence to the contrary.

When delusion occurs in an individual, it's often caused by a brain pathology, such as schizophrenia, and not because of a lack of intelligence. Closed groups are extremely prone to having delusional beliefs^{55} – “groupthink” is a popular term describing a dysfunctional collective belief system – and clear evidence of it can be observed within the scientific community today, especially in physics. Physicists don't produce anything tangible, so long-term survival for them depends on obtaining scarce research grants and securing tenured university professorships. Competition among physicists is extreme, with constant pressure to get their papers published^{56} and having those papers cited by other physicists, so anyone who openly questions the prevailing beliefs of the group would be committing professional suicide. These factors plus the hierarchical structure of the scientific community in general^{57} have allowed cosmology and particle physics to acquire certain aspects of science fiction.

55 Especially in cults like Marshall Applewhite's Heaven's Gate and to a lesser extent within a corporate culture.

56 Major scientific journals employ an insidious anonymous peer review process that is extremely effective in screening out any inconvenient facts that suggest that a previous Nobel Prize may have been awarded based on an invalid idea.

57 Its leaders have true rock-star status and are frequently seen promulgating their latest fantasies on TV and YouTube.


Appendix N – Common Scientific Blunders

While pondering the question of why science is not solving the reality riddle, it occurred to me that the difficulty involves a set of common scientific blunders. Here are a few of them.

1. Using classical models to explain quantum phenomena

The origin of string theory is a perfect example of this blunder. In 1968, Gabriele Veneziano discovered that the scattering amplitudes of strongly-interacting mesons were described by the Euler beta function. This was yet another example of a physical “law” that describes a behavior perfectly without actually explaining it. When mathematical laws don't adequately explain things, scientists typically try to construct classical models that fill in the blanks. Leonard Susskind, who was a young associate professor at Yeshiva University at the time, looked for a physical mechanism that would explain the observed scattering amplitudes. He miraculously came up with a model of the meson consisting of two electrically charged particles attached to the ends of a tiny string. Susskind was able to prove that if two such objects collide with each other and scatter, the amplitudes would match Veneziano's formula exactly. He had apparently unlocked the secret of why Veneziano's theory worked, and physicists everywhere began to model everything in the universe from soup to nuts using tiny vibrating strings. Thus, string theory was born and Susskind was thereafter known as its father. I don't deny that Susskind's coming up with a model made out of string that closely matched meson scattering amplitudes was a very impressive intellectual tour de force. But as far as I'm concerned, it was a completely pointless exercise that added nothing to the understanding of why mesons scatter that way. The fallacy he committed was conflating “as if” with “is.” The fact that mesons, which are quantum objects, scatter as if they were tiny classical pieces of string with electrical charges attached to the ends does not mean mesons actually are tiny classical pieces of string with electrical charges attached to the ends. Although Susskind's model was able to duplicate scattering amplitudes, it wasn't very good at doing much else.
Quantum interactions are inexplicable, and they cannot be properly “understood” by using inappropriate classical models. Quantum mechanics simply is what it is. Albert Einstein could never come to terms with this fact, and it seems many scientists have trouble accepting it even today.
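For what it's worth, the formula at the heart of this story is compact enough to evaluate directly: Veneziano's amplitude is a ratio of Euler gamma functions (the Euler beta function) of linear Regge trajectories. In this sketch the trajectory intercept and slope are placeholder values I chose for illustration, not fitted meson parameters:

```python
from math import gamma

def alpha(x, a0=0.5, a_prime=1.0):
    """Linear Regge trajectory alpha(x) = a0 + a'*x (illustrative parameters)."""
    return a0 + a_prime * x

def veneziano(s, t):
    """Veneziano amplitude: Gamma(-a(s)) Gamma(-a(t)) / Gamma(-a(s) - a(t))."""
    return gamma(-alpha(s)) * gamma(-alpha(t)) / gamma(-alpha(s) - alpha(t))

# Crossing symmetry: the amplitude is unchanged when s and t are swapped.
print(veneziano(0.1, 0.3))
print(veneziano(0.3, 0.1))
```

The formula "describes" the scattering perfectly – symmetry under swapping s and t, poles at evenly spaced trajectory values – without saying one word about what a meson is, which is exactly the author's point.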

2. Equating organization with information

Humans have a natural tendency to value things that are new and shiny and to debase things that are old and decrepit. This leads to the fallacy that order and organization equal information. For example, most people (including scientists) would jump to the conclusion that a book is rich in information simply because it is highly organized and has meaning. They would conclude that incinerating the book in a furnace would completely destroy all the information it contains. In fact, the quantity of information has nothing to do with the content or how it is organized. The smoke, ashes, and gas that the burning book generates actually contain more information than the book itself did before it was set on fire. The fact that we cannot decipher that information doesn't matter. There is a fundamental quantum principle that once information is created it can never be destroyed – period. ⁵⁸

3. Confusing entropy with heat

Temperature, pressure, enthalpy and entropy were very well known to mechanical engineers in the Steam Age. They meticulously measured those properties and compiled and published the data in steam tables. Unfortunately, entropy was thereafter always associated with heat. While it is true that entropy, S, is related to heat, Q, and temperature, T, through the equation dS = dQ/T, entropy is much more closely related to information than it is to heat. Ludwig Boltzmann defined entropy as being proportional to the logarithm of the number of degrees of freedom, ⁵⁹ which is essentially the same as Claude Shannon's definition of information. Nevertheless, most people today, including physicists, associate entropy with heat, disorder, wasted energy, and lost opportunity for doing useful work – in other words, entropy is

58 Leonard Susskind refers to this as “The Minus First Law of Physics.” In other words, the non-destructibility of information is so basic to quantum physics that it precedes and supersedes all other laws. It also happens to be the basis of the second law of thermodynamics.

59 This was an amazing accomplishment, considering that classical physics was all Boltzmann had to work with.


associated with a lot of bad things. On the other hand, living in the Information Age, we associate information with “good” things like texting and Facebook. Consequently, it's difficult for people to accept the fact that entropy and information are really the same thing.
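To make the entropy/information identity concrete, here is a minimal sketch (in Python, my own illustration, not anything from the physics literature) showing that Boltzmann's S = k ln W and Shannon's H = −Σ p log₂ p agree up to a constant factor whenever all W microstates are equally likely:

```python
import math

def boltzmann_entropy(W, k=1.0):
    """Boltzmann entropy S = k ln W for W equally likely microstates."""
    return k * math.log(W)

def shannon_entropy(probs):
    """Shannon information H = -sum(p log2 p), in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# For W equiprobable microstates the two measures differ only by a constant:
W = 1024
S = boltzmann_entropy(W)            # k ln W, in natural units (k = 1)
H = shannon_entropy([1 / W] * W)    # log2 W = 10 bits

# Conversion factor between "nats" and "bits": S = k ln(2) * H
assert abs(S - math.log(2) * H) < 1e-9
print(H)  # 10.0 (bits)
```

The only difference between the two definitions is the choice of logarithm base (and Boltzmann's constant k), which is exactly the point: entropy and information are the same quantity in different units.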

4. Mistaking the Second Law of Thermodynamics as a mere statistical byproduct

I watched a lecture by a well-known physicist, who made the astounding claim that all physics is reversible. He used the example of planets revolving around the Sun, where it is impossible to tell whether the planets revolve counter-clockwise with time moving forward or clockwise with time running backward. Because quantum processes similarly work in both directions, some scientists mistakenly conclude that everything is reversible. A classic thought experiment reveals this misconception:

Imagine a room with a bottle of perfume in it. When the bottle is opened, the perfume evaporates and eventually the perfume molecules are uniformly dispersed. As perfume fills the room, the entropy of the room increases according to Boltzmann's principle. According to the Second Law of Thermodynamics, the entropy cannot decrease spontaneously, but according to the myth of “all physics is reversible,” if we are very patient and wait a really, really long time, all of the dispersed perfume molecules would find their way back into the perfume bottle and condense back into a liquid. In other words, it's not really impossible for entropy to decrease, it's just highly improbable. The Second Law isn't really a law at all – it's merely a statistical byproduct of random reversible molecular motions. Well, that's just wrong. In order for the perfume to spontaneously go back into the bottle, entropy would have to decrease. Since entropy equals information, this requires the destruction of information, which clearly violates Susskind's Minus First Law. No, the Second Law is not just some statistical byproduct that can be overridden given enough time and patience. It's the most fundamental law there is.
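Just how “improbable” is the reversal? Here is a back-of-the-envelope calculation using exactly the classical independent-molecule picture I'm criticizing (the volumes and molecule count below are my own illustrative assumptions):

```python
import math

# Toy estimate (classical, independent molecules): the probability that all N
# perfume molecules are simultaneously back inside the bottle is (v/V)**N.
bottle_volume = 1e-4      # 100 mL bottle, in cubic meters (assumed)
room_volume   = 30.0      # a modest room, in cubic meters (assumed)
N = 6.0e20                # rough molecule count for a few grams of perfume

# (v/V)**N underflows any floating-point type, so work with log base 10:
log10_probability = N * math.log10(bottle_volume / room_volume)
print(log10_probability)   # roughly -3.3e21, i.e. P = 10**(-3.3e21)
```

Even granting the “statistical byproduct” view for the sake of argument, a probability whose *exponent* has twenty-one digits is not something patience can overcome.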

This blunder is related to Blunder #1, above. Scientists habitually use classical toy models in an attempt to explain quantum mechanical processes that are fundamentally unexplainable. Robert Boyle's gas model uses perfectly elastic billiard balls as molecules. If gas molecules truly were tiny, perfectly elastic billiard balls, then the process of perfume dispersion would be completely reversible. But gas molecules aren't tiny billiard balls, ⁶⁰ and interactions between molecules are based on quantum mechanics, not classical physics. The only way the dispersion of perfume molecules can be reversed is by reversing time. Except for some very special, simple cases, physics is fundamentally irreversible: information is never destroyed, entropy never decreases, and time never goes in reverse.

5. Thinking of the universe as having an outside

Everything we observe in the universe has a border with an inside and outside, and since everything in the universe has a border, we slip into thinking of the universe as having a border also. This leads to the common blunder of trying to conceptualize a model of the universe from the exterior, which is an impossible God's eye perspective of reality.

I watched Brian Greene giving a lecture about the universe, where he talked about three-dimensional space going on and on forever without end. Well, Brian, there's a problem with that. The only possible way you can travel in a straight line forever is if the universe is infinite. The problem with an infinite universe is that it contradicts the big bang theory, which says the universe was initially smaller than a proton until inflation expanded it to the size of a grapefruit; then it just kept expanding. As far as I can tell, a proton and a grapefruit are both finite. So how did the universe get to be infinite, and when did that happen? But if the universe is finite and you keep traveling in a straight line, you'll hit the edge. The problem is that the universe doesn't have an edge. If it did, what would be on the other side?

The universe is unique because it's the only object that has neither spatial nor temporal edges or borders. This sets up the classic subject-object problem of “observing” something from the inside. There are no external measuring sticks or clocks that you can use to measure it spatially or temporally, but despite this obvious fact, cosmologists go right ahead and try to make those measurements anyway.

One of the big mysteries that deeply troubles cosmologists is why the universe seems to be so darned flat. They call it the Flatness Problem, and this is one of the reasons Alan Guth invented inflation. Well, I would ask, why wouldn't it seem to be flat? You see, there is no standard straight edge that exists outside the universe that can measure flatness on the inside, so all we can do is measure the universe on the inside using light rays. By definition, light travels in straight lines, ⁶¹ so yes, using light rays makes the universe seem flat from every vantage point. But bear in mind it's true only because we assume light rays travel in straight lines. Everything in the universe can only be measured relative to something else in the universe. If we used curved yardsticks as references, curved objects would look straight. How could we ever tell whether anything is truly straight or curved?

60 Perfectly elastic billiard balls only exist in Plato's world of metaphysics. Real billiard balls collide irreversibly.

This perception of universal flatness comes from the diorama fallacy. A diorama is a three-dimensional miniature model. When I was a kid in junior high school, I made a diorama of the solar system for a science fair project. I used a large cardboard box with one of the sides cut out and suspended the Sun and planets ⁶² from the top with string. The inside of the box was lined with black construction paper with stars painted on it. I thought it was pretty cool. The point I'm trying to make is that scientists tend to think of the universe as if it's a three-dimensional miniature model – a diorama. It's understandable that a 7th grader could conceive of a model universe that fits into a three-dimensional box, because he would think of the universe the same as everyday objects. Adult cosmologists should know better.

The illusion that light travels in straight lines seriously distorts cosmological observations. Suppose we observe two quasars 12 billion light years from Earth, separated by 180° in the sky. We would naturally think of the two quasars as being separated by 24 billion light years. The problem is we are seeing both of those quasars from the past, not in the present. The present only really exists here where we are. Those quasars are from when the universe was 12 billion years younger, and the universe was much more compact then than it is today, assuming the standard cosmological model is correct. This means the two quasars visible now could not have been 24 billion light years apart when their light was emitted – what we see doesn't match reality. Seeing the entire universe as flat and three-dimensional is an illusion, and this causes us to greatly miscalculate distances between very distant objects and thus completely distorts our perception of the universe.

The sketch below schematically illustrates the cosmic distortion. Space is shown horizontally and time vertically. The dotted lines indicate a large apparent separation between two very distant stars, based on light traveling in straight lines in flat space along an observer's light cone. The solid red lines are the light paths from the stars, with a much smaller actual separation in the past, the curvature being due to an expanding universe. The solid light paths and the observer's light cone converge in the present, creating the illusion of a much larger separation.
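To put rough numbers on the quasar example, here is a toy calculation. I assume a matter-dominated (Einstein-de Sitter) universe with scale factor a(t) proportional to t^(2/3), which is a simplification of mine, not the full standard model, but it shows the size of the effect:

```python
# Toy Einstein-de Sitter model: a(t) ~ t**(2/3).
# Assumptions: universe age t0 = 13.7 Gyr; light from each quasar traveled
# 12 Gyr, so it was emitted at t_e = 1.7 Gyr. Units: Gyr and Gly, with c = 1.

t0, t_e = 13.7, 1.7

# Comoving distance to each quasar in this model:
D_comoving = 3 * t0 * (1 - (t_e / t0) ** (1 / 3))

# Two quasars 180 degrees apart: comoving separation is twice that.
# Their *proper* separation when the light left them is scaled by a(t_e)/a(t0):
scale = (t_e / t0) ** (2 / 3)
sep_at_emission = scale * 2 * D_comoving

naive_separation = 2 * 12.0   # the "24 billion light years" of flat thinking
print(round(sep_at_emission, 1), naive_separation)  # roughly 10 vs 24
```

Even in this crude model, the actual separation at emission is less than half the naive straight-line figure, which is exactly the distortion described above.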

The universe is far stranger than any human being can possibly conceive. The fact is that nobody – including a cosmologist – can comprehend the universe without trying to mentally step outside and observe it, but stepping outside the universe simply is not possible even in principle. Since true science is based on observation, the inability to observe the universe renders cosmology non-science.

61 Light rays do curve near a large gravitating mass like the Sun; however, they only curve relative to other light rays that are far away from the gravitating mass, which we define as traveling in straight lines. Everything is relative inside the universe; there are no absolute measurements.

62 I used a painted tennis ball for the Sun and ping-pong balls, marbles, and little wads of clay for the planets. Obviously, my diorama of the solar system was not built to scale.


Appendix O – The Headless Way and Wheeler's Big U

In Appendix N, Blunder #5, I showed that although cosmologists are (or should be) aware that we live in a dynamic, expanding universe, they tend to revert to using static, flat, three-dimensional, diorama-like models to describe it. Consequently, the current cosmological standard model isn't really much of an improvement over the one from the Middle Ages, depicted below. ⁶³

Our medieval friend is shown peeking under the firmament to get a better God's eye view of the cosmos. We laugh at this primitive naiveté, but modern people, including scientists, also slip into committing this blunder. I'm convinced humans are just not capable of conceptualizing the universe, and it all boils down to the subject-object problem. In order to make an observation, the subject must be separated from its object. That is not a problem for most objects because they all have exteriors. But the universe doesn't have an exterior, so there is no way to conceptually separate ourselves from the universe. The subject-object problem makes it impossible to form any consistent hypothesis of the universe. In fact, it may lead to a far more radical conclusion: the thing we refer to as “the universe” may not really exist. ⁶⁴

Amanda Gefter's book “Trespassing on Einstein's Lawn” describes her quest to discover what is ultimately real based on observation. She used invariance as her criterion for deciding whether things are real. She and her father compiled a list of candidates of invariant things on a table napkin while eating in a Chinese restaurant. By doing exhaustive research that involved attending lectures and interviewing a number of notable scientists, ⁶⁵ she was eventually able to cross everything off her list. She concluded that since every observation is dependent on the observer's particular frame of reference, nothing could satisfy her invariance criterion, and therefore nothing is real.

Douglas Harding had explored this problem in detail by paying very close attention to the spatial aspects of observations and meticulously recording what he observed. He was strongly influenced by the relativity principle that states space and time are observer-dependent and not absolute, so he tried to discover who or what the observer ultimately is.

Harding stumbled on an amazing “self portrait” (shown on the following page) drawn by none other than Ernst Mach. If you remember, Mach was a harsh critic of Isaac Newton and a forerunner of Albert Einstein in the development of special relativity. This self portrait was drawn from a peculiar perspective of zero distance, and it includes both feet and legs, a trunk, both arms and hands, and part of a nose, eye socket and mustache, but no head. Harding realized that Mach drew his portrait this way because his self was located inside his head and the self cannot observe the self. This inspired Harding to develop The Headless Way, which involved a series of experiments and exercises that force a person's attention to turn inward. Ultimately, a practitioner of

63 The artist is unknown, but the drawing first appeared in Camille Flammarion's book “The Atmosphere: Popular Meteorology” in 1888, so it became known as Flammarion's engraving. The caption reads, “A missionary of the Middle Ages tells that he had found the point where the sky and the Earth touch.”

64 While it sounds strange, this conclusion is consistent with the Copenhagen interpretation of quantum mechanics, which holds that something does not exist until (or unless) it is observed.

65 Some of these folks are mentioned throughout this essay.


this technique comes to the realization that the self does not exist. Eckhart Tolle also prescribes certain mental practices and meditations that produce the same result. The reason I brought up Gefter, Harding, Mach, and Tolle is that their ideas support the principle that unobservable things don't exist.

So what does exist? That brings us to another interesting thought experiment. Imagine we look back through space and time using a very powerful telescope that is able to observe the big bang. ⁶⁶ Since the big bang is the origin of everything in the universe, the telescope provides us the ability – in principle at least – to see our own origin, which is very weird indeed. John Wheeler pondered this and concluded that the universe must be a kind of giant feedback loop, depicted below. He called it The Big U.

A conscious being in the present is represented by the big eyeball looking back at its own origin. The universe begins at the big bang singularity and evolves and expands into the letter U, while the light path from the origin to the eyeball goes directly along the straight dashed line. Wheeler could not draw this as the typical false diorama of flat space, because the observer's light cone would then expand backward in time and hit a non-existent edge of space before it reached the singularity. Bear in mind that the Big U should not be taken literally as a toy model of the universe; rather, it serves as a schematic diagram or mental aid for grasping Wheeler's unique concept of reality. ⁶⁷

The Big U diagram encapsulates Wheeler's “it from bit” conjecture, mentioned several times in this essay. Not only do conscious beings have the ability (at least in principle) to observe their own origin, Wheeler insists they actually participated in bringing it about. In Wheeler's participatory universe, reality boils down to a Big Thought. While conscious beings might be able to reach some sort of consensus about the general features of reality, I'm afraid myriad conscious observers, all having unique frames of reference, would have serious disagreements over important details, as Gefter recognized. This makes achieving a Grand Unified Theory of Everything very doubtful in my opinion.

66 In order to penetrate beyond the so-called CMB curtain to the big bang itself, this telescope would have to use something other than electromagnetic radiation, possibly gravity waves. But I believe most cosmologists would agree that some kind of signal coming from the big bang is possible.

67 When initially encountering Wheeler's ideas, they often sound completely bat-shit crazy. But the more you think about them, the more sense they make.


Appendix P – Gravity Waves, Higgs Bosons and All That Noise

The standard model of cosmology (SMC) today holds that inflation supplied the initial “kick” for the Big Bang. The famous cosmologist Alan Guth is credited with coming up with this idea. According to Guth, during an incredibly short time interval, somewhere between 10⁻³³ and 10⁻³² seconds, the universe expanded exponentially from a size smaller than a proton to something around the size of a grapefruit. ⁶⁸ Inflation supposedly solves the “problems” of the universe being so darned flat and the temperature of the cosmic background radiation being so darned uniform. ⁶⁹ But despite that, there is no real evidence that inflation ever took place.

So a team of scientists traveled to the South Pole and set up a microwave receiver to measure the cosmic microwave background (CMB) that made Penzias and Wilson famous. This receiver was specially designed to measure something called the “primordial B-mode polarization.” Electromagnetic waves, such as radio, microwaves, and light, can be polarized, meaning their electric field components are oriented in a single direction instead of every which way. Now according to inflation theory, the exponential inflation that started the Big Bang must have generated colossal gravity waves that bent and twisted the space-time of the early universe. This bending and twisting would have squeezed the CMB waves into recognizable patterns called primordial B-mode polarization. It carried the primordial label because the polarization was supposedly established by gravity waves when the universe was only 10⁻³² seconds old. By reading the imprint of B-mode polarization on the CMB, the scientists believed they could look behind the red-hot CMB “curtain” of ionized matter and actually “see” the grapefruit-sized universe that existed right after inflation stopped. So the idea was to look for B-mode polarization and thus prove that inflation really had taken place.

Well, after long, lonely months of making measurements at the frigid South Pole, the team had collected enough data to confirm that primordial B-mode polarization was real. Therefore, gravity waves were detected, inflation was proven, and cosmology was “solved.” Champagne bottles popped, press conferences were held, papers were published, and trumpets blared. All that remained was a ticker-tape parade down Broadway and a trip to Stockholm to pick up the inevitable Nobel Prizes.

Unfortunately, it turned out that the measurements were bogus. You see, dust in our galaxy and between the galaxies also polarizes electromagnetic waves, and there is a very large amount of dust between the South Pole and that primordial grapefruit 13.7 billion light-years away (and 13.7 billion years ago). In other words, all that intervening dust would have generated an awful lot of random noise on top of the polarization signals they were looking for. So they had a brilliant idea. Why not just subtract that noise from the measurements they took, leaving nothing but a pure signal? Here's their logic:

Signal at South Pole = Primordial B-mode Polarization + Random Noise from Dust

∴ Signal at South Pole – Random Noise from Dust = Primordial B-mode Polarization

Well, that idea just doesn't work, although I wish it did, because it would make communication engineering a whole lot easier. Unfortunately, since random noise is random, you can't cancel out one source of noise by “subtracting” noise coming from a different source. Doing that will only increase the total amount of noise. In this case, our South Pole team took a PowerPoint slide used by another research group for a presentation on microwave polarization caused by dust, digitized the image from the slide, and subtracted those numbers from their own data! Thus, all those fancy curlicues they interpreted as “primordial B-mode polarization” were essentially noise scraped off

68 Of course, as I pointed out earlier in this essay, all this talk about what is the “size” of the universe today or what it was in the past is complete nonsense. It is impossible to define beginning and end points for making such measurements.

69 Of course, these are not problems at all. The universe appears to be flat simply because a straight line is defined as the path light takes and using light is the only way we can measure flatness from within the universe. The apparent temperature of the “cosmic background radiation” (after being red shifted over a 13+ billion-years interval) equals the temperature of the intergalactic matter in the foreground, which equals the apparent temperatures of every part of the universe at all distances corresponding to earlier epochs. The magic value of 2.7°K can be explained as the mean temperature of the universe everywhere (and everywhen) once all the red shifts are taken into account. Any deviations from this temperature are impossible because this would violate the second law of thermodynamics. Case closed.


a PowerPoint slide from another group. To date, no actual gravity waves have been detected from primordial B-mode polarization, or from any other technique for that matter. ⁷⁰
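The statistical point is easy to demonstrate numerically. Subtracting one random noise record from an independent one cancels nothing; the variances add. Here is a minimal simulation (my own sketch, obviously not the team's actual pipeline):

```python
import random

random.seed(42)
N = 100_000

# Two *independent* Gaussian noise records with identical statistics (std = 1):
noise_a = [random.gauss(0, 1) for _ in range(N)]
noise_b = [random.gauss(0, 1) for _ in range(N)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# "Subtracting" one independent noise source from the other:
difference = [a - b for a, b in zip(noise_a, noise_b)]

# For independent A and B, Var(A - B) = Var(A) + Var(B): noise roughly doubles.
print(variance(noise_a), variance(difference))  # ~1.0 and ~2.0
```

Subtraction only cancels noise when you subtract the very same noise realization your instrument recorded; a digitized map made by a different group with a different instrument is not that.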

When Penzias and Wilson first “discovered” the CMB ⁷¹ back in 1964, it was featureless. The signal appeared to be black-body radiation coming from a surface with a temperature of 2.7°K, and it was uniform in every direction. Later, a series of satellites were launched that took more precise 360° latitudinal and longitudinal surveys of the CMB. Satellites are much better than ground-based receivers because they are above the Earth's atmosphere, which blocks much of the cosmic microwave radiation. ⁷² The first of these satellites, COBE, generated a sensation when its results came back. COBE proved the CMB was indeed thermal and there were tiny variations in temperature – about 1 part in 100,000 – between different regions of the sky. By dialing up the contrast settings, these variations could be displayed as vivid false-color maps of the sky, leading Nobel Prize-winning cosmologist George Smoot to gasp, “It was like seeing the face of God!” Data from later satellites, WMAP and Planck, produced vastly more detail than COBE and even more spectacular false-color images. With greater resolution, those images are turning out to be scale-invariant fractal patterns, so it seems that Smoot hadn't really seen God's face after all. ⁷³ You could also generate beautiful fractal patterns using an old laptop picked up at a yard sale for $50, which would be a lot cheaper than creating fractal images using satellites.

It should be pointed out that fractal patterns are essentially the same as noise. But this doesn't stop cosmologists from judging those CMB false-color images as being very significant, “proving” in one way or another various pet theories involving quantum gravity, primordial black holes, superstrings, magnetic monopoles, extra dimensions, parallel universes, or whatever else might catch their fancy.
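The claim that scale-invariant patterns are cheap to produce from pure randomness is easy to verify. A classic recipe is midpoint displacement, which injects random noise at every scale and yields a self-similar, fractal profile. This is my own illustrative sketch, not a statement about how the CMB maps are made:

```python
import random

def midpoint_displacement(depth, roughness=0.5, seed=0):
    """Generate a self-similar (fractal) 1-D profile by recursively
    displacing midpoints with random noise that shrinks with scale."""
    random.seed(seed)
    points = [0.0, 0.0]          # endpoints of the profile
    amplitude = 1.0
    for _ in range(depth):
        new_points = []
        for left, right in zip(points, points[1:]):
            mid = (left + right) / 2 + random.uniform(-amplitude, amplitude)
            new_points += [left, mid]
        new_points.append(points[-1])
        points = new_points
        amplitude *= roughness   # noise amplitude shrinks with the scale
    return points

profile = midpoint_displacement(depth=10)
print(len(profile))  # 1025 points, with random structure at every scale
```

Zoom into any stretch of the resulting profile and it looks statistically like the whole, which is precisely what “scale-invariant” means, and it took a dozen lines of random number calls to produce.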

The problem with all this CMB hoopla is that scientists know very well that dust also radiates microwaves. In fact, interstellar dust happens to have the same temperature as the CMB. So looking for a primordial signal from 13+ billion years ago is kind of like trying to find a very distant grey object through a thick London fog. So what do the scientists do? You guessed it: They “subtract” the foreground dust noise from the COBE, WMAP and Planck satellite signals. It's the same bogus technique they used at the South Pole to “detect” gravity waves. I wish an electrical engineer specializing in communication would take these PhD Nobel Prize winning physicists aside and tell them they simply can't extract meaningful information from signals when the signal-to-noise ratio is less than 1%. They need to bone up on Claude Shannon's work to fully understand the problem of extracting signals from noise and how to solve it. “Subtracting” noise from noise just won't do it.

Next we turn to the Large Hadron Collider (LHC), a multibillion-euro machine operated by CERN that employs about 10,000 physicists and various engineering types. The long sought-after Higgs boson was finally detected by the LHC in 2012. That machine generated two 3.5 TeV beams of protons (later ramped up to 6.5 TeV in 2015) traveling in opposite directions and smashing into each other at four separate locations.

Higgs “signals” are a set of particles that Higgs bosons give off when they decay. ⁷⁴ The problem is those “signals” are completely swamped out by noise; most proton-proton collisions produce a virtual cacophony of signals that are not even remotely related to the Higgs boson. Particle physicists refer to that noise as “background.” So in order to detect the Higgs, the “background” had to be removed. Can you guess what technique they used? You're right! First, they computed the “background” from data generated by earlier collisions with energies well below 3.5 TeV, with Higgs bosons completely absent. When those signals were “subtracted” from the signals from the 3.5 TeV collisions, the difference just had to be the Higgs signal, right? Don't you see the fallacy here? Is science solving the reality riddle? You be the judge.

70 This is surprising, considering the exploding galaxies, stars collapsing into black holes and other violent activities going on all around us. Maybe, just maybe, gravity waves are like the “luminiferous aether” that nobody could detect.

71 The word discovered is too strong. Actually, Penzias and Wilson didn't have the foggiest idea of what they had found until a group of scientists at Princeton told them it was CMB. They initially thought it came from bird poop.

72 They are also very far away from any bird poop that could contaminate microwave signals coming from space.

73 Images of various personages, especially Jesus and the Virgin Mary, often emerge from fractal patterns. I'm guessing it's because of how our brains are wired.

74 Some of the most prominent decay products are protons. Should anyone be surprised to find protons in the shower of particles that come from two beams of protons smashing together?


Appendix Q – An Update on the Gravity Wave Quest

I released the previous appendix before this big announcement was made: Einsteinian gravity waves were detected at 05:51 EDT on September 14, 2015 by a machine named LIGO (Laser Interferometer Gravitational-wave Observatory). The announcement set off a flurry of breathless news reports in the mainstream press and popular science media, and there undoubtedly will be several Nobel Prize nominations coming out soon.

The LIGO device looks a lot like the interferometer used by Albert Michelson and Edward Morley in their failed attempts in 1887 to measure the Earth's velocity through the ether. ⁷⁵ The source of the gravity waves that LIGO reportedly detected was a collision of two black holes 1.3 billion light years away. This was a truly cataclysmic event, converting the mass of many Suns into pure gravitational energy, with a peak power output exceeding 50 times that of all the stars in the visible universe. Even so, the gravitational effect on the LIGO apparatus was small, deflecting the mirror in one of the interferometer legs a mere one ten thousandth of the diameter of a proton. Considering that defects on the mirror's surface are billions of times larger than protons, I'm a bit skeptical about the ability of this instrument to accurately measure deflections that small.

Of course, coming from an engineer and amateur scientist like myself, my skepticism probably sounds like the rantings of a crackpot. But there are others who share my doubt. Among them are Xiaochun Mei of the Institute of Innovative Physics in Fuzhou, China, and Ping Yu of Cognitech Calculating Technology Institute in Los Angeles, CA. ⁷⁶ The Journal of Modern Physics published a paper by Mei and Yu entitled “Did LIGO Really Detect Gravitational Waves?” that questioned the assumptions, predictions, methodology, apparatus, and data analysis that led to the conclusion that gravity waves really were detected. Their doubts centered around the following.

• The ability to calculate fingerprints of gravity waves resulting from the collision of two black holes based on Einstein's field equations.

• The ability of the interferometers to detect movements of a mirror that are a mere 0.0001 times the diameter of a proton.

• The ability to screen out electromagnetic influences that are on the order of 10⁴⁰ times stronger than gravity waves.

Everyone, including LIGO's cheerleaders, admits that the gravity signal was buried in an awful lot of noise. ⁷⁷ The LIGO team didn't actually “hear” the black holes colliding, as the press releases claimed. They used computers to sift through the noise and look for a signature of a signal matching what they believed waves from colliding black holes would look like. Since Einstein's equations are non-linear, nobody can solve such a collision analytically, so it was simulated on a computer. Based on the computer simulation, the team designed digital filters matching the waveforms from the computer output, using those filters to extract the expected signal from the interferometer noise. Yikes!!

Let me use a crude analogy: Suppose I'm trying to detect a piano playing a Middle C note, so I take a tuning fork matched to Middle C plus other tuning forks matched to all of the piano's overtones and mount them on a table next to a G.E. CFM56 jet engine running at full throttle. Someone hits the Middle C key on a piano in the background, and when the tuning forks are checked, their vibrations match those of a piano. What do you suppose the odds are that the piano made the vibrations and not the noise from the jet engine? The problem with big, expensive science projects is that there is just too much at stake if the results don't pan out. I'm not saying the LIGO team is corrupt or dishonest; I'm just saying there could be a bit of confirmation bias at work here.
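For readers unfamiliar with the technique in question, here is a toy version of template matching (a crude matched filter): slide a known waveform along noisy data and take the offset with the highest correlation. The template, noise levels, and offsets below are all my own made-up example, not LIGO's actual filters:

```python
import math
import random

random.seed(7)

# A known "template" waveform: one cycle of a sine burst, 16 samples long.
template = [math.sin(2 * math.pi * t / 16) for t in range(16)]

# Data: mild Gaussian noise, with the template buried at offset 40.
data = [random.gauss(0, 0.2) for _ in range(128)]
for i, s in enumerate(template):
    data[40 + i] += s

def correlate(data, template):
    """Slide the template along the data, taking a dot product at each
    offset: a crude matched filter."""
    n = len(template)
    return [sum(d * t for d, t in zip(data[i:i + n], template))
            for i in range(len(data) - n + 1)]

scores = correlate(data, template)
best = scores.index(max(scores))
print(best)  # the offset where the template matches best
```

With mild noise the peak lands at (or next to) the hidden offset. Crank the noise standard deviation up past the template's amplitude, though, and the peak wanders to random offsets while still “matching” the template, which is exactly my worry about fishing for a simulated waveform in a noisy interferometer record.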

75 Of course, Albert Michelson and Edward Morley didn't use lasers and the LIGO apparatus is much bigger. There are two LIGO interferometers; one located in Hanford, WA and the other in Livingston, LA.

76 Mei and Yu don't seem like a couple of crackpots; however, Yu might be an engineer, which would certainly make any physicist highly suspicious of him.

77 I would think a tractor trailer rolling down I90 near Hanford and hitting a pot hole could easily jar the mirrors by more than the width of a proton.


Appendix R – Fuzz, Fire or Failure?

The Kavli Institute for Theoretical Physics held a workshop entitled “Black Holes: Complementarity, Fuzz, or Fire?” from August 19^{th} through August 30^{th}, 2013 in Santa Barbara, CA. The duration of the workshop – ten working days – indicated how vitally important this topic was to the theoretical physics community. The issue at hand was the so-called AMPS firewall paradox, the latest in a long series of paradoxes and contradictions involving black holes (see Appendix K, above). Two of the main protagonists in that drama, Joe Polchinski and Don Marolf, kicked things off with their talks on why there must be firewalls and what really happens inside black holes. Many of the stellar figures among string/black hole/quantum gravity theorists gave talks as well. You can watch the whole shebang on YouTube by clicking on this link: Fuzz 'n Fire Workshop

For me, the climax of the workshop occurred on Friday, August 23^{rd}, when Stephen Hawking gave a short talk via teleconference from Cambridge University in England. Hawking told the audience that the theories he had been working on for nearly 40 years were wrong, which meant that the theories the theoretical physicists in the audience had been working on for nearly 40 years were also wrong. ^{78} Of course, he didn't actually say that in so many words – he tried to break the news to them gently – but essentially that's what he said. A black hole doesn't really have a glass-smooth, razor-thin event horizon, an invisible boundary that separates reality from non-reality. Instead, there's a completely chaotic “apparent horizon,” a fuzz ball whose fuzzy surface emits ordinary thermal radiation, with no entangled qubits, wormholes, or other black-hole quantum exotica. ^{79}

You could almost hear the air leave the room during Hawking's talk. A senior member of the black hole society, Leonard Susskind, asked Hawking if he would kindly point out the mistake AMPS made in their paper. If he had really been listening to Hawking, he would have known that the event horizon was the mistake AMPS made, and it's the same mistake everybody was making during the previous four decades. A visibly agitated W. G. Unruh got up next and tried to explain away everything Hawking had just said. ^{80} First, he wrote the following sentence on the blackboard.

1) Black Holes Form.

Translation: “Ignore the doubters and believe.”

Unruh wrote “Gμν = Tμν” below that sentence, which of course is the short form of Einstein's field equations. I have no idea why he wrote down the EFE, but he might as well have written E = mc^{2} or F = ma. This is how he explained how black holes form: He drew a vertical line on the board and said something like, “Imagine this line is an event horizon. There's no way the stress tensor [the Tμν term in the EFE] can deflect stuff away from the event horizon, so stuff just falls through it. Case closed.” Unfortunately, imagining something already exists doesn't tell us anything about how it formed in the first place. That's what science has devolved into: imagining things forming but not knowing how.

You can find out many more details about the unreality of event horizons in my companion essay “Why There Are No True Black Holes” available from my Amateur Scientist Essays web page.

The rest of the workshop proceeded as if Hawking hadn't said a single word. A parade of participants gave talks on their favorite theories, all of which are based on black holes. To me, the workshop was just another epic failure of science to get at the truth. Max Planck once said, “Science advances one funeral at a time.” I don't know whether it's funerals or paradigm-changing experiments that aren't coming fast enough, but there seems to be an entire “lost generation” of theoretical physicists.

78 The dawn of this foolishness was in 1974 when Hawking invented Hawking radiation. Kip Thorne made some earlier contributions to black holology, so he deserves some credit too. I think I was 11 or 12 years old (in 1958 or 1959) when I first read a magazine article about black holes that described an event horizon. It didn't make much sense to me then, and it made even less sense to me when I got older and studied physics, higher mathematics and engineering.

79 What Hawking described sounded more or less like the surface of an ordinary star that has lots of gravity.

80 Unruh's talk was very dark and conspiratorial. He kept referring to people called “they” (backsliders like Hawking perhaps?), who apparently are trying to undermine physics. Unruh, who was approaching age 70 at the time, spent much of his career on black holes, so I guess I can understand his angst over everyone realizing that black holes don't exist.


Appendix S – Galactic Halos Revisited

Appendix J investigated whether halos could form from dark matter. I concluded that there is no way to form a stable halo held together by gravity unless the halo exerts internal pressure. This requires strong particle-particle interactions, which are absent in dark matter by definition. Okay, so what is really causing the orbital-velocity anomalies observed in disk galaxies? I originally proposed that perhaps a new theory of gravity is needed for large distances, or that the geometry of space isn't quite three-dimensional on the cosmological scale. Both of those “fixes” involve pretty extreme measures.

I decided to step back a bit and see if I could derive orbital velocities from the equation of state for a gravitating cloud of ordinary gas using some simplifying assumptions:

1. The cloud represents most of the total mass of a galaxy.

2. The cloud is static and spherical.

3. The cloud obeys the ideal gas law. (This involves pressure that depends on strong interactions between particles. But such strong interactions are impossible with dark matter because it only interacts through gravity, which is far too weak on atomic scales.)

4. The temperature of the cloud is uniform throughout.

I know the last three simplifications are not 100% true, but let's see where they lead. First of all, hydrostatic equilibrium requires the pressure to change by an increment, dP(r), given by the following equation.

dP(r) = – G M(r) ρ(r) dr / r^{2}     (1)

Here, r is the distance from the center of the sphere and ρ(r) is the density of the cloud at r. M(r) is the total mass of the cloud within r. M(r) is calculated by integrating the following expression from 0 to r:

dM(r) = 4π ρ(r) r^{2} dr     (2)

These two equations cannot be solved without some relationship between P(r) and ρ(r). The ideal gas law provides that relationship: P(r) = ρ(r) RT, where R is the universal gas constant and T is the absolute temperature. Equation (1) can now be rewritten:

dρ(r) / ρ(r) = – G M(r) dr / (r^{2} RT)     (3)

Equations (2) and (3) must be solved simultaneously. This is hard to do analytically, so I solved them by numerical integration using a computer spreadsheet. The chart below shows ρ(r) and M(r) plotted as functions of r with ρ(0), G, and RT all arbitrarily set to 1 without any loss of generality.
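The spreadsheet itself isn't reproduced here, but the same numerical integration can be sketched in a few lines of Python. This is a simple Euler scheme; the step size and outer radius are my own arbitrary choices.

```python
import math

# Euler integration of equations (2) and (3) with rho(0) = G = RT = 1,
# as in the text. The step size and outer radius are arbitrary choices.
dr = 0.001
r_max = 50.0
steps = int(r_max / dr)

rho = 1.0            # central density, rho(0) = 1
M = 0.0              # enclosed mass starts at zero
r = dr               # start one step out to avoid dividing by zero

r_hist, rho_hist, M_hist = [], [], []
for _ in range(steps):
    M += 4.0 * math.pi * rho * r**2 * dr   # eq. (2): dM = 4*pi*rho*r^2 dr
    rho += -M * rho * dr / r**2            # eq. (3): drho = -G*M*rho*dr/(r^2*RT)
    r += dr
    r_hist.append(r)
    rho_hist.append(rho)
    M_hist.append(M)

# The density collapses toward zero while the enclosed mass keeps
# climbing roughly in proportion to r.
print(rho_hist[-1], M_hist[-1])
```

Running it reproduces the qualitative behavior the chart shows: ρ(r) falls off steeply while M(r) grows almost linearly.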

We notice that although the density of the cloud, ρ(r), trails off very rapidly, the contained mass, M(r), continues to rise almost linearly. This is because the incremental volume, dV = 4π r^{2} dr, increases as the square of the radius, suggesting the mass of the cloud becomes infinite as r → ∞. This clearly cannot be the case, and I'll explain why later. Nevertheless, let's stick with these crude results and compute the velocity of a test object orbiting through the cloud around its center of mass at a distance r:

v_{orb}(r) = √( G M(r) / r )

The orbital velocity along with gas density are plotted below, with G again arbitrarily set to 1.

You'll see distinct similarities between the orbital velocity plotted above and the typical orbital “anomalies” that have been published for spiral galaxies:

This suggests to me that a mysterious “dark matter” halo might just be a giant blob of very thin, cold, transparent (ideal) gas – maybe ordinary hydrogen – that hasn't coalesced into stars and other visible “stuff.” I find that the best explanation of a mystery is often the simplest and least extreme one.

By the way, the reason why M(r) can't keep increasing without limit as r → ∞ is that some of the gas molecules in the thin outer regions reach gravitational escape velocity, v_{esc} = √( 2 G M(r) / r ). When the mean distance between collisions of gas molecules in the thinning cloud becomes extremely large, the fastest molecules simply escape from the cloud altogether, which puts a boundary around the cloud.
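A quick numeric check shows why a linearly growing M(r) produces a flat rotation curve. Here G and the slope k of M(r) are arbitrary illustrative values, not fitted numbers.

```python
import math

# If the enclosed mass grows linearly, M(r) = k*r, both the orbital and
# escape speeds lose their dependence on r.
G = 1.0   # arbitrary units, as in the text
k = 2.0   # illustrative slope for M(r)

for r in (10.0, 20.0, 40.0):
    M = k * r
    v_orb = math.sqrt(G * M / r)        # = sqrt(G*k): a flat rotation curve
    v_esc = math.sqrt(2.0 * G * M / r)  # always sqrt(2) times v_orb
    print(r, v_orb, v_esc)
```

Every radius gives the same pair of speeds, with v_{esc} exceeding v_{orb} by a fixed factor of √2.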


Appendix T – Seeing is Not Believing

Look at the two images of Bell Rock below.

Let's say Bell Rock on the left is Reality – the totality of everything that has existed within our causal patch from the present all the way back to the earliest time. Now consider a thin sliver of that scene, shown on the right. Would you be able to ascertain what Reality truly is based on looking at that thin sliver? Of course not. It's no exaggeration to say that cosmologists face the same problem of understanding Reality using only information that comes to them along the thin surface of our light cone. Looking out into the depths of space, it seems like we're seeing a big, fat, flat, three-dimensional space diorama. But we're only seeing a thin sliver: a 3D projection of a 1T + 3D Reality. Not only do we get very limited information about the entire scene, but the information we do get is also completely misleading.

The figure below represents our causal patch contained within a light cone stretching out into space and back in time. One of the three spatial dimensions was omitted so that the figure could be drawn on paper, depicting a 1T + 2D reality instead of the 1T + 3D Reality we live in.

Any information that can be gleaned using light waves (or any other speed-of-light phenomena) must have originated on the 2D surface of the light cone, arriving at Here and Now along paths shown by the arrows. Anything that existed inside the light cone is simply not visible. ^{81} This is equivalent to trying to tell what Bell Rock looks like by only being allowed to see one thin sliver of it.

Furthermore, if space is flat, the width of the light cone must increase going back in time. However, if the universe expanded from a tiny singularity, our causal patch must have been very small when the expansion started. Those two versions of Reality don't match. Space can either be flat or it can be expanding, but it can't be both. Observations provide incomplete and misleading information, which is why the field of cosmology is such a mess and the standard cosmological model doesn't make sense.

81 We can't see dinosaurs roaming the Earth by aiming telescopes at the interior of our light cone; however, we can see evidence they existed within our light cone by digging up their fossilized remains.


Appendix U – Deflating a Meme

Hubble's Law can be stated in the following form: v = H_{0} · r, where v is the recession velocity of an object as seen by an observer (inferred from its red shift), H_{0} is Hubble's constant, and r is the distance between the object and the observer. Popular scientific literature often uses simple analogies to illustrate an expanding universe and Hubble's Law. The analogy of an inflating balloon is used so often it has become a meme:
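As a minimal numeric sketch of the law: the round value H_{0} ≈ 70 (km/s)/Mpc is commonly quoted, although the precise measured figure is disputed (roughly 67–74 depending on the method).

```python
H0 = 70.0  # Hubble's constant in (km/s)/Mpc -- a commonly quoted round value

def recession_velocity(r_mpc):
    """Hubble's Law: recession velocity (km/s) of an object r_mpc megaparsecs away."""
    return H0 * r_mpc

print(recession_velocity(100.0))  # 7000.0 km/s for a galaxy 100 Mpc away
```

Doubling the distance doubles the recession velocity, which is the whole content of the linear law.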

The red swirly dots on the surface of the balloon represent galaxies and the blue X is an observer. The distances between the observer and two galaxies are measured instantaneously along the surface of the balloon as shown by the arrows. As the balloon inflates, all galaxies move away from the observer, with more distant dots moving away faster than nearby dots in proportion to their instantaneous distances from X. The problem with this model is that light does not travel instantaneously; thus, none of the galaxies on the surface of the balloon are visible from X because they lie outside X's light cone. The only galaxies that are visible from X are in the interior of the balloon.

One might be inclined to modify the balloon model to take this fact into account, shown below as a cross section of an expanding sphere.

Observers on the red spatial surface can only see objects along the paths of their light cones marked by the curved arrows. (The angle, φ, between a light path and the spatial surface of any sphere must be the same everywhere because the speed of light is the same everywhere.) Here, B is visible from A and C, and D is visible from C and E. But although it might be tempting to use this model instead of the common balloon meme, it must be stressed that this model too is utterly false. The geometry of an expanding universe is not Euclidean; therefore, it is impossible to correctly model an expanding universe using any object that can be drawn in Euclidean space – period.

