
Williams: Jeffers PEAR Proposition Critique Post 1

Misrepresenting the PEAR Proposition?

A Critique of Stanley Jeffers' Article "The PEAR Proposition: Fact or Fallacy?"

Bryan Williams
[This is a post I wrote for Mike Wilson's Psi Society group on Yahoo in April of 2006, which has been revised and updated. It critically addresses some of the arguments presented in Jeffers' article, which appeared in the May-June 2006 issue of The Skeptical Inquirer.]

From 1979 to 2007, the Princeton Engineering Anomalies Research (PEAR) Laboratory in the School of Engineering and Applied Science at Princeton University provided the parapsychological and consciousness research communities with some of the most extensive and rigorously controlled experiments relating to human-machine anomalies, or what is also known as psychokinesis (PK, or more commonly, "mind over matter"). It was one of the only academically affiliated laboratories within the United States to continuously maintain an active research program related to parapsychology, a program that also included research on geographical remote perception (also known as remote viewing). In 2005, PEAR Director Robert Jahn and Laboratory Manager Brenda Dunne published an article in the Journal of Scientific Exploration entitled "The PEAR Proposition," in which they retraced the history of the PEAR Lab in some detail.1 While this article highlighted many of PEAR's notable findings, it was also the focus of a critique by Stanley Jeffers (a professor of physics and astronomy at York University in Canada), which appeared in the May-June 2006 issue of The Skeptical Inquirer.2 In particular, the focus of Jeffers' critique is on PEAR's PK-related experiments using random event generators (REGs), with the claim being made that "a close examination of their primary random event generator [sic] calls the data into question" (Jeffers, 2006, p. 54). Here I wish to present a "critique of the critique" that addresses Jeffers' critical comments and shows ways in which they may be inaccurate or misleading about the PK-related experiments conducted by PEAR. But before doing so, it is imperative to first provide a brief and simplified introduction to those experiments for readers who may be unfamiliar with them.
The PEAR REG Experiments: A Brief Introduction

To study the ostensible interaction between mind and matter, the PEAR group conducted a series of experiments over a 12-year period using their benchmark REG, a custom-built device that uses a commercially available electronic noise source to produce a random series of binary bits (i.e., a series of 1s and 0s). As a convenient and familiar metaphor, we can think of the REG's method of generating random numbers as being analogous to repeatedly flipping an electronic coin into the air, and then seeing whether "heads" (a 1) or "tails" (a 0) comes up on each flip, with the probability of each outcome being 50/50. The outcomes of the REG's electronic coin flips were cumulatively represented in the PEAR experiments by a randomly moving line displayed on a computer screen, which moved up or down based on whether the REG had produced heads or tails, respectively. Ideally, under ordinary circumstances, the REG would be expected on average to produce an equal balance of heads and tails over a long series of electronic coin flips. The goal of a person participating in an REG experiment is to upset this balance by trying to influence the REG


to produce more heads (or more tails) than would be expected by chance alone. As an indirect way of achieving this goal, the participant usually focused his/her attention on the moving line display and attempted to mentally will the line to consistently move in one intended direction. There were three directions of intention that a participant aimed for in the PEAR experiments, with each being alternately assigned in the course of an experimental session:
1.) When asked to aim "high" (HI, the direction associated with producing more heads), the participant tried to willfully move the line in an upward direction away from expected randomness;
2.) When asked to aim "low" (LO, associated with producing more tails), the participant tried to willfully move the line in a downward direction away from expectation;
3.) When asked to aim for a "baseline" (BL, associated with maintaining a steady balance of heads and tails), the participant tried to willfully make the line stay level around expectation, without having it move consistently in any one direction.
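To make the coin-flip metaphor concrete, here is a minimal Python sketch of how the cumulative-deviation trace behind the moving line display can be modeled. This is entirely my own illustration (the function names are hypothetical, not PEAR's software): each bit nudges the trace up or down by half a step, so an unbiased REG hovers around zero.

```python
import random

def simulate_reg(n_bits, seed=None):
    """Simulate an unbiased REG: each bit is one electronic 'coin flip'."""
    rng = random.Random(seed)
    return [rng.randint(0, 1) for _ in range(n_bits)]

def cumulative_deviation(bits):
    """Running excess of heads (1s) over the chance expectation.

    This is the quantity traced by the moving line: positive values
    mean more heads so far than the expected 50/50 balance.
    """
    trace, excess = [], 0.0
    for bit in bits:
        excess += bit - 0.5  # +0.5 for a head, -0.5 for a tail
        trace.append(excess)
    return trace

bits = simulate_reg(10_000, seed=1)
trace = cumulative_deviation(bits)
print(f"final deviation after {len(bits)} flips: {trace[-1]:+.1f}")
```

Under the HI intention, a participant's goal would amount to making this trace drift persistently upward; the statistical question is then how unlikely the observed terminal deviation is under the 50/50 chance model.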

The overall results of PEAR's 12-year experimental program with the benchmark REG are shown graphically in Figure 1 below (adapted from Jahn et al., 1997). They indicate that, rather than being purely random as expected, the REG data conformed well to the willful intentions of the 91 volunteers who participated in the experiments. Even when the participants tried to maintain a steady baseline, there still appeared to be some degree of influence on the REG data. In contrast, control REG data collected when no one was around did not show these directional patterns, instead behaving purely randomly as expected.

[Figure 1 image: cumulative deviation traces for the HI, LO, and BL conditions; the vertical axis runs from "More heads; less random" at top to "More tails; more random or noisy" at bottom.]

Figure 1. A graphical summary of the results of PEAR's 12-year experimental program on human-machine interactions using the benchmark REG. The smooth curved arcs represent the threshold of statistical significance (odds of 20 to 1 against chance). (Adapted from Figure 2 of Jahn et al., 1997)


From a statistical standpoint, the overall results indicated a very small but highly significant effect, with combined odds amounting to about 35 trillion to one against chance, which strongly indicates that the directional patterns seen in the experimental data were not happening by chance coincidence. Something was apparently influencing the behavior of the REGs.

Differences between the Experiments of Jeffers and PEAR

After providing a brief review of the PEAR REG experiments in the introductory section of his critique, Jeffers (2006, pp. 55-56) raises some issues regarding the methods and procedures used by PEAR. He begins by noting that he has conducted several PK-related experiments of his own, usually in collaboration with others. The results obtained by the PEAR Lab had apparently motivated him to pursue these independent experiments, but unlike PEAR's, many of his experimental results appear to be consistent with chance, and have thus been considered largely unsuccessful. As a result, Jeffers' results have been viewed within some skeptical circles as evidence that casts serious doubt on the PEAR results. However, it is important to note here that Jeffers' experiments differed markedly from PEAR's in several respects. For instance, rather than initially trying to reproduce the PEAR results in any direct way by asking his participants to influence an REG, Jeffers asked them in some of his early experiments to try to influence a more complex target: the diffraction pattern produced by subatomic particles going through the classic single- and double-slit experimental set-up used in physics (Jeffers & Sloan, 1992; Ibison & Jeffers, 1998). Although one can point out that an REG and the particles going through the slit set-up are similar in principle because they both involve a random process,3 they are hardly exactly comparable.
And while the PEAR Lab did conduct some conceptually similar experiments in the past, involving attempts to influence the fringe pattern produced by an interferometer (Jahn, 1982, pp. 141-143), these experiments were not as extensive or as successful as the REG experiments. Although the results in his single-slit experiment were at chance, the results in Jeffers' double-slit experiment were mixed in a way that is suggestive of an experimenter effect. In the latter case (Ibison & Jeffers, 1998), the same experiment was conducted separately by Jeffers at York University, and by the PEAR group at Princeton, using the same set-up. Whereas the PEAR version of the experiment obtained marginally significant results (with odds of about 20 to 1 against chance), the same version of the experiment conducted by Jeffers produced only chance results. In the one experiment Jeffers conducted that came closest to directly reproducing the PEAR REG experiments (Freedman et al., 2003), the participants comprised two groups: one was a group of neurological patients who had suffered damage to the frontal lobe of their brains, and the other was a group of healthy individuals, most of whom were members of the research staff.4 Both were asked to try to influence the data produced by a portable REG obtained from PEAR. Although the attempts at the task by the healthy individuals resulted in a chance outcome, one of the neurological patients was able to produce a significant result (with odds of about 665 to 1 against chance). And when he was asked to participate in a follow-up experiment, this same patient succeeded again at the task to a significant degree (odds of about 86 to 1 against chance). This seems to indicate that there was a modest PK effect in at least part of the data.


Despite this, there were again some notable differences between this experiment and the ones conducted by PEAR. For instance, the participants in the PEAR experiments were all healthy volunteers, rather than being neurological patients or research staff. Relative to the amount collected by PEAR, the total number of REG samples collected in this experiment was notably smaller, which could have affected the chances of obtaining a significant result. These and other differences were pointed out by PEAR physicist York Dobyns (2003) in a published commentary. In mentioning his experiments, Jeffers (2006) describes his use of a method of experimental control that is quite different from the one used by PEAR. As he states in his critique:
One characteristic of the methodology in experiments in which I have been involved is that for every experiment conducted in which a human has consciously tried to bias the outcome, another experiment has been conducted immediately following the first when the human participant is instructed to ignore the apparatus [i.e., the slit set-up in the case of his early experiments, or the REG in the case of his experiment with the neurological patients]. Our criterion for significance is thus derived by comparing the two sets of experiments. This is not the methodology of the PEAR group, which chooses to only occasionally run a calibration test of the degree of randomness of their apparatus (pp. 55-56).

Although Jeffers claims that this method is a more sound control than the one used by PEAR, Dobyns (2003) makes several statistical and procedural points to the contrary. In addition, the method may be susceptible to a potential confound, but in order to understand how, a few points have to be made. The first point is that our current knowledge about the duration and reach of PK effects is rather limited.5 Thus, there is no way we know of (as of yet) to start and stop PK from occurring within a given period. The second point is that participants in a PK experiment do not always have to direct their attention toward an REG in order to be able to influence it. The research using field REGs, which was initiated by the PEAR group (Nelson et al., 1996, 1998), offers one of the clearest indications of this. This seems to suggest that PK can operate unintentionally as well as intentionally (which might reflect dual unconscious and conscious aspects to psi). One might argue that both of these points could factor into a situation where a control period is carried out immediately after the experimental period, by considering the possibility that some PK effects could unintentionally carry over from the experimental period into the control period. Addressing this kind of possibility in relation to their field REG data, the PEAR group stated:
When it is feasible to take [matching control] data in a given environment before and after the designated experimental segments, some of the surround time periods themselves may be subject to the same influences as the active segments. (Indeed, even in laboratory experiments there is evidence that traditional control data may not be immune to anomalous effects of consciousness) (Nelson et al., 1998, p. 452).

If this has at least some plausibility, then control periods that immediately follow experimental periods may not be purely free of PK effects, raising the question of whether they can be classified as true controls. For that reason, it might be better to space the experimental and control periods apart in time, so that any lingering influences by the participant (whether intentional or unintentional) are less likely to enter into the control data.


The Mind/Machine Consortium Replication: A Complete Failure?

At the end of his section on methods and procedural issues, Jeffers (2006, p. 56) mentions the attempts by a consortium of three laboratories in the U.S. and Germany6 to independently reproduce the original REG findings produced by PEAR (Jahn et al., 2000). Like many skeptics, Jeffers pays attention only to the fact that "[t]hese attempts failed to reproduce the claimed effects. Even the PEAR group was unable to reproduce a credible effect" (p. 56). However, were these attempts really a failure in the purest sense of the word? When the consortium's results are examined closely, there is some indication that perhaps they weren't. As Dean Radin (2006, pp. 155-156) points out in the chapter on PK in his book Entangled Minds, although the collective results of the consortium did not reach statistical significance, the same directional patterns of influence observed in the PEAR database were basically indicated within those results. Figure 2, which reproduces Radin's graph using the same data, illustrates this.
[Figure 2 image: bar chart of the PEAR z-scores (left axis, "z-score (PEAR)") overlaid with dots marking the consortium z-scores (right axis, "z-score (Consortium)"), plotted against the horizontal axis "Direction of Mental Influence".]

Figure 2. Comparison of the REG results originally produced by the PEAR Lab (colored bars) with those collectively produced by the consortium of three laboratories (dots) in their attempts to reproduce the original PEAR results. While the magnitudes of the two results were greatly different, they both exhibited the same directional patterns of influence. Based on Figure 9-4 of Radin (2006).

In addition, several internal patterns were observed in the consortium data which were similar to those seen in other PK-related experiments. These patterns included, for example, series position effects, in which PK performance gradually decreases (or increases) during the first part of an experimental series, and then begins to gradually reverse direction and rebound during the later part, forming something akin to a flattened U-shaped pattern. Such effects were observed in the PEAR data (Dunne et al., 1994), as well as in the early PK experiments conducted by J. B. Rhine and his colleagues at Duke University, in which participants attempted to influence rolling dice (e.g., Reeves & Rhine, 1943; Rhine, 1969). Series position effects have also been commonly found in several mainstream psychological experiments (Thompson, 1994).

The Issue of Baseline Data

In the third section of his critique, Jeffers (2006, p. 56) draws attention to what he believes to be a serious issue in trying to interpret the PEAR results, using three graphs of the PEAR REG database as it progressed over time. One of these graphs is the one shown in Figure 1 above, which shows the results of the complete 12-year database collected by the PEAR group with their benchmark REG. In that graph, Jeffers points to the observation that the baseline data (BL in Figure 1) gradually begin to exhibit a directional pattern over the course of the database, moving upward away from expected randomness and eventually exceeding the threshold for statistical significance. On the basis of this observation, Jeffers argues that "[t]his has to call into question the claimed statistical significance of the data labeled HI and LO in the same plot" (p. 56). However, it's important to recognize that Jeffers' argument seems to rest upon the assumption that the word "baseline" is synonymous with the word "control." One implicit indication of such an assumption comes from Jeffers' description of the baseline result, which he says is obtained from data in which "no effort is made to bias the equipment" (p. 56). Jeffers thus seems to argue that the non-random pattern exhibited by the baseline data in Figure 1 is an indication that there's a problem with the REGs, such that they're likely to be malfunctioning and not producing purely random data as expected.
But as indicated in the brief introduction to the PEAR experiments presented earlier, the assumption that the baseline condition represents a pure control condition would not be entirely accurate in this case, because "baseline" is in fact one of the directions of intention being aimed for by the participant. Thus, there is an effort being made by the participant to bias the baseline data, but only in the sense of trying to maintain a steady baseline. This indicates that the baseline data are still subject to influence by the participants, and so the baseline condition cannot be considered a control condition, because it is in fact part of the experimental condition. Elsewhere in their publications (e.g., Jahn et al., 1997, pp. 347-348), the members of the PEAR group indicate that what more accurately represents a pure control is their calibration data, which were collected from the REGs when no one was around, and which conformed to chance as expected. Furthermore, even though the baseline data appear to reach significance in Figure 1, the PEAR group notes (Jahn et al., 1997, p. 350; Jahn & Dunne, 2005, p. 201) that two-tailed statistics are required in the evaluation of these particular data instead of one-tailed, because no prior prediction is made for their direction; it is only assumed that they will fluctuate around expectation. Under two-tailed statistics, the overall result for the baseline data would fall short of statistical significance at odds of 20 to 1 (they would only have odds of around 11 to 1). So even if one could accurately assume that the baseline condition constitutes a pure control condition, the baseline data would still not reach a level of statistical significance adequate enough for them to be labeled as either anomalous or faulty.

Conclusion
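The one-tailed versus two-tailed distinction above is easy to check numerically. The sketch below is my own illustration (the function name is hypothetical): it converts a z-score into the "odds against chance" figures used throughout this post. A z-score near 1.73 is assumed purely for illustration, because it reproduces the contrast just described, clearing the one-tailed 20-to-1 threshold while yielding only about 11-to-1 odds two-tailed.

```python
import math

def odds_against_chance(z, two_tailed=False):
    """Convert a z-score to 'odds against chance', i.e. (1 - p) / p.

    The one-tailed p-value is the upper-tail probability of the standard
    normal distribution; the two-tailed p-value doubles it, since a
    deviation in either direction would count.
    """
    p = math.erfc(z / math.sqrt(2)) / 2  # one-tailed tail probability
    if two_tailed:
        p = min(1.0, 2 * p)
    return (1 - p) / p

# z = 1.645 is the conventional one-tailed 5% criterion: about 19-to-1,
# i.e. the "odds of 20 to 1 against chance" threshold cited in the text.
print(round(odds_against_chance(1.645)))                  # ~19
# An illustrative z near 1.73: significant one-tailed, not two-tailed.
print(round(odds_against_chance(1.73)))                   # ~23
print(round(odds_against_chance(1.73, two_tailed=True)))  # ~11
```

The point the sketch makes is purely arithmetical: doubling the p-value roughly halves the odds against chance, which is why a deviation that looks significant under a one-tailed criterion can fall well short under the two-tailed criterion the PEAR group says the baseline data require.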


Based on the points made here, it would seem that Jeffers' argument that the PEAR data can be called into question is not adequately justified. In addition, this critique of the critique should show that it is just as important to be careful and cautious in considering the criticisms made against parapsychology-related research as it is in considering the research itself.
Notes

1.) The history of the PEAR Lab and its results are also extensively covered by Jahn and Dunne in their recent book Consciousness and the Source of Reality: The PEAR Odyssey (Princeton, NJ: ICRL Press, 2011).

2.) The magazine published by the Committee for Skeptical Inquiry, which was formerly known as the Committee for the Scientific Investigation of Claims of the Paranormal, or CSICOP.

3.) Like the activity of the electrons producing the noise in an REG, particles going through the slit set-up have probabilities associated with them as to whether they will act as particles or as waves when encountering the double-slit configuration, and as to where they will be detected on the surface displaying the diffraction pattern.

4.) The choice to have neurological patients as participants follows from an interpretation by Freedman et al. (2003) of some ideas proposed by Jahn and Dunne (1986) of how the concept of wave-particle duality may be metaphorically extended to consciousness. Of this, Freedman et al. write: "Based on data from the PEAR lab, they suggest that consciousness has the potential to influence random physical events and that this effect is maximal when consciousness is exhibiting wave properties rather than particle properties. Jahn and Dunne propose that the wave properties of consciousness correlate best with a state in which individuals are able to divert their attention away from their self-awareness in relation to events around them. This analogy suggests that states of reduced self-awareness may facilitate the effects of consciousness on physical phenomena" (p. 652). Thus, on the basis that some neurological evidence indicates that the frontal lobe is involved in the experience of self-awareness, patients with damage to the frontal lobe were asked to participate.

5.) This is probably due to the limited amount of process-oriented research on PK that has been possible, since throughout much of its history, the focus in parapsychology has been largely on proof-oriented research.

6.) Comprised of PEAR and two German labs: the Institut für Grenzgebiete der Psychologie und Psychohygiene (IGPP) in Freiburg, and Justus-Liebig Universität in Giessen.

References

Dobyns, Y. H. (2003). Comments on Freedman, Jeffers, Saeger, Binns, and Black: Effects of frontal lobe lesions on intentionality and random physical phenomena. Journal of Scientific Exploration, 17, 669-685.

Dunne, B. J., Dobyns, Y. H., Jahn, R. G., & Nelson, R. D. (1994). Series position effects in random event generator experiments. Journal of Scientific Exploration, 8, 197-215.

Freedman, M., Jeffers, S., Saeger, K., Binns, M., & Black, S. (2003). Effects of frontal lobe lesions on intentionality and random physical phenomena. Journal of Scientific Exploration, 17, 651-668.

Ibison, M., & Jeffers, S. (1998). A double-slit diffraction experiment to investigate claims of consciousness-related anomalies. Journal of Scientific Exploration, 12, 543-550.

Jahn, R. G. (1982). The persistent paradox of psychic phenomena: An engineering perspective. Proceedings of the IEEE, 70, 136-170.

Jahn, R. G., & Dunne, B. J. (1986). On the quantum mechanics of consciousness, with application to anomalous phenomena. Foundations of Physics, 16, 721-772.

Jahn, R. G., & Dunne, B. J. (2005). The PEAR proposition. Journal of Scientific Exploration, 19, 195-245.

Jahn, R., Dunne, B., Bradish, G., Dobyns, Y., Lettieri, A., Nelson, R., Mischo, J., Boller, E., Bösch, H., Vaitl, D., Houtkooper, J., & Walter, B. (2000). Mind/machine interaction consortium: PortREG replication experiments. Journal of Scientific Exploration, 14, 499-555.


Jahn, R. G., Dunne, B. J., Nelson, R. D., Dobyns, Y. H., & Bradish, G. J. (1997). Correlations of random binary sequences with pre-stated operator intention: A review of a 12-year program. Journal of Scientific Exploration, 11, 345-367.

Jeffers, S. (2006, May-June). The PEAR proposition: Fact or fallacy? The Skeptical Inquirer, pp. 54-57.

Jeffers, S., & Sloan, J. (1992). A low light level diffraction experiment for anomalies research. Journal of Scientific Exploration, 6, 333-352.

Nelson, R. D., Bradish, G. J., Dobyns, Y. H., Dunne, B. J., & Jahn, R. G. (1996). FieldREG anomalies in group situations. Journal of Scientific Exploration, 10, 111-141.

Nelson, R. D., Jahn, R. G., Dunne, B. J., Dobyns, Y. H., & Bradish, G. J. (1998). FieldREG II: Consciousness field effects: Replications and explorations. Journal of Scientific Exploration, 12, 425-454.

Radin, D. (2006). Entangled Minds: Extrasensory Experiences in a Quantum Reality. New York: Paraview Pocket Books.

Reeves, M. P., & Rhine, J. B. (1943). The PK effect: II. A study in declines. Journal of Parapsychology, 7, 76-93.

Rhine, J. B. (1969). Position effects in psi test results. Journal of Parapsychology, 33, 136-157.

Thompson, A. (1994). Serial position effects in the psychological literature. Journal of Scientific Exploration, 8, 211-215.