On Complexity and Pragmatism

Dean Radin, Ph.D.


INTRODUCTION

The cyberneticist W. Ross Ashby once wrote an article discussing the implications of "a fundamental limitation on the human intellect, especially as that intellect is used in scientific work" (Ashby, 1958, p. 94). In essence, Ashby argued that the experimental methods used in the complex sciences of human behavior, economics, and medicine were developed primarily for physics and chemistry. Such methods, entirely suitable for homogeneous, weakly interacting systems, were, Ashby argued, completely inappropriate for complex systems, which are often highly heterogeneous and strongly interacting.

Ashby went on to suggest that "understanding" a complex system is very different from understanding a simple system. If understanding a system means developing a model that is isomorphic with the system, characterizing it mathematically, and perhaps holding it in one's head, then "when the complexity of the system exceeds the finite capacity of the scientist, the scientist can no longer understand the system ...." (p. 97).

One important consequence of this cognitive limitation is that certain complex systems have to be understood in operational terms. That is, for some complex systems, it may not be possible to build theories, or know "why" something works, but we may know "how" something works.

My research is focused on the complex system of human consciousness, especially those perplexing anomalies of consciousness called "psychic abilities," such as remote viewing and psychokinesis. This topic is not merely complex; it is hyper-complex. We are not just dealing with within-disciplinary complexities such as the neurology of consciousness, or subliminal perception, or how belief influences experimental designs and outcomes, or the sociology of frontier science. Instead, we are dealing with all of those factors thoroughly entangled with matters of environmental, historical, biological, political, and even theological import.

Given this complexity, I am not surprised that many scientists are skeptical of psychic phenomena. It takes about two decades of intensive study to become an expert in a scientific sub-specialty, so who has the time to become knowledgeable about a highly complex, controversial field? When viewed from a single, conventional scientific perspective, claims of psychic phenomena seem foolish. When viewed simultaneously from several scientific disciplines, such claims often seem less foolish. And when viewed from many disciplines, spanning science, history, art and religion, the phenomena almost begin to make sense. Yet few scientists have had the patience or the motivation to cram enough broad-spectrum knowledge into their heads to grasp why. Most of us want to see the "big stuff" before we become interested enough to pay attention. We want undeniable, statistics-free, in-your-face psychic phenomena landing on the front lawn of the White House.

Thus, my colleagues and I are faced with a dilemma: We think we are sitting on empirical evidence of something very interesting -- genuine psychic phenomena -- yet hardly anyone is paying attention to it because you need to be a specialist to appreciate the nature of the data and why it is interesting.

My present approach to resolving the dilemma is to fully acknowledge the complexities of consciousness, to respect our cognitive limitations, and to focus on pragmatics. As a result, our research in the Consciousness Research Division at the University of Nevada, Las Vegas, is designed to circumvent the "does it exist," proof-oriented scientific controversy, and jump directly to practical, industrial use of psychic effects. Ultimately, of course, the proof will be in the pudding. But what prompted us to explore this pudding in the first place?


Beyond High Technology

There is a fast-growing trend in the field of human-computer interaction (HCI): The evolution is away from isolated libraries, telephones and computers and towards a more intimate union between people and a host of information and communication technologies. As we move from computer keyboards to pointing devices, eye trackers, speech understanding, and virtual realities, the boundaries between humans and machines begin to blur. I envision a time when we will witness a profound blurring - a subtle yet direct interaction between mind and the operation of machines. This will be accomplished without deciphering brainwaves, as is already being explored (Gevins et al, 1987; Daviss, 1994). Instead (dodging the debate on monism vs. dualism) I refer to something like the "ghost in the machine," that is, machines that respond to and directly interact with human consciousness.

I suggest that these direct interactions between mind and machine are even now associated with rare, spontaneous computer failures. I also suggest that what is currently viewed as annoying or delightful coincidences - depending on whether your machine mysteriously fails or recovers at precisely the right (or wrong) time - will eventually be harnessed into a new technology of direct mind-machine interaction (DMMI). DMMI technologies offer the promise of solving several problems that are presently intractable in economic or human terms.

Finally, I suggest that primitive versions of these DMMI technologies will be demonstrated much sooner than most people realize, possibly before the turn of the century.


Why do systems fail?

Considering the interdependence of human activities and computer-based technologies in virtually all domains of modern life, it has become vitally important to understand why these systems sometimes fail (McCarthy, 1988; Pelegrin, 1988). Great strides have been taken in the design of fault-tolerant computers (Avizienis et al, 1987), and causes of the great majority of computer system failures can now be traced to either human factors or machine factors. Human factors include poor user interface design, stressful work environments, logical and functional design errors, and software bugs (Shneiderman, 1987). Machine factors include circuit board failures, power surges, and electromagnetic interference (Parhami, 1988; Stiffler, 1981).


When failure categories fail

However, it is not always possible to assign failures to known categories. While some unexplained failures can undoubtedly be resolved with sufficient detective work, as computer systems become more complex, distributed, and interdependent, assigning the ultimate cause of a failure becomes much more difficult (Hornick, 1987; Petroski, 1985). Chaos theory, for example, indicates that there are severe limits on our ability to predict the future of completely deterministic systems, including computers (Percival, 1992). Even redundant, fault-tolerant computer systems sometimes fail in mysterious ways (Hecht & Dussault, 1987). Therefore, besides examining the known categories of human and machine factors for possible sources of system failures, it is also productive to explore a less well-understood intermediary: gremlins.


Gremlins?

Some people are renowned for their ability to fix machines. Others are prohibited from even being in proximity to electronic equipment during important demonstrations, for fear that the equipment will fail. Marks and Kammann (1980) refer to this phenomenon as the "gremlin effect." In fact, the apparent tendency of things to go wrong at the worst possible time is so prevalent that Murphy's Law is half-seriously regarded as a "first principle" in engineering and scientific circles. Many such superstitions undoubtedly arise as a result of psychological factors such as selective human memory, and some are probably related to factors such as personality traits associated with high versus low accident involvement or personality mismatches between system designers and end-users.

However, after sifting through the odd coincidences and unexplained glitches, a residue of anecdotes and a growing body of laboratory research suggest that this lab lore may arise from something else. Among the many anecdotes about unusual human-machine interactions, Gamow (1959) describes the "Pauli Effect" as follows: "It is well known that theoretical physicists are quite inept in handling experimental apparatus; in fact, the standing of a theoretical physicist is said to be measurable in terms of his ability to break delicate devices merely by touching them. By this standard, Wolfgang Pauli was a very good theoretical physicist; apparatus would fall, break, shatter or burn when he merely walked into a laboratory."

Other experimenters, such as Edison, were legendary for their ability to get complex laboratory apparatus working with extraordinary speed (Price, 1984). Such anecdotes, as well as dozens of others that arise in every technical environment, give rise to the nervous laughter associated with Murphy's Law. Can such things be explained? Are they related to what I have called DMMI effects? When I was on staff at AT&T Bell Laboratories in the early 1980's, I decided to explore these questions by putting Murphy's Law to the test.


THE EVIDENCE


A thought experiment

How can we test if human thought (intention, will, wishes) and computer operations are directly interdependent? Specifically, how can we objectively test whether conscious mental intention interacts with sensitive electronic circuits such as those found in computers?

Consider the following experiment, which will be familiar to readers of this Journal: An electronic circuit is devised which produces sequences of random bits, similar to circuits used in electronic gambling games and digital encryption key generators. The source of randomness in the circuit is either electronic noise or radioactive decay, as both provide truly random events.

The device is designed to generate, say, 100 bits when a button is pressed. As each bit is generated, it is matched against an alternating "target" bit, i.e., 0 1 0 1.... When a generated random bit matches the target bit, a counter increments, and at the end of the 100-bit sequence, a display shows the number of matches, or hits. Chance expectation predicts that the displayed number of hits will average 50, with a standard deviation of 5.

Now you ask a person to do three things: First, simply press the button and wish for the displayed number to be greater than 50. This is called a trial. On the second trial, the subject wishes for the number to be less than 50, and on the third trial, the subject just presses the button and thinks about some distracting task as a control. This "tri-polar" protocol is repeated thousands of times with many different subjects, and the outcome is evaluated statistically to see if the cumulative wishes are associated with biases in the electronic device's output.
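
To make the logic of this protocol concrete, the following is a minimal simulation sketch in Python. It is not the laboratory apparatus or the original analysis: a software pseudorandom generator stands in for the electronic noise or radioactive decay source, and by construction the simulated "wishes" have no effect, so the aggregate z-scores should hover near zero.

```python
# Minimal simulation of the "tri-polar" protocol under the null hypothesis.
# Illustrative sketch only: a pseudorandom generator replaces the hardware
# noise source, and intention has no effect here by construction.
import numpy as np

rng = np.random.default_rng(seed=1)
BITS_PER_TRIAL = 100          # bits generated per button press
TARGET = np.tile([0, 1], 50)  # alternating target sequence 0 1 0 1 ...

def one_trial():
    """Generate 100 random bits and count matches ("hits") against the target."""
    bits = rng.integers(0, 2, BITS_PER_TRIAL)
    return int(np.sum(bits == TARGET))   # expectation 50, s.d. 5

def run_session(n_repeats=1000):
    """Run the aim-high / aim-low / control conditions n_repeats times each."""
    results = {"high": [], "low": [], "control": []}
    for _ in range(n_repeats):
        for condition in results:           # the subject's intention differs,
            results[condition].append(one_trial())  # the hardware does not
    return results

def z_score(hits, n_trials, bits=BITS_PER_TRIAL):
    """Z score of total hits against binomial chance expectation."""
    expected = 0.5 * bits * n_trials
    sd = np.sqrt(0.25 * bits * n_trials)
    return (np.sum(hits) - expected) / sd

if __name__ == "__main__":
    data = run_session()
    for condition, hits in data.items():
        print(f"{condition:8s} z = {z_score(hits, len(hits)):+.2f}")
```

In a real experiment, the question is whether the aim-high and aim-low conditions show small but statistically reliable deviations from 50 percent while the control trials do not.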


Experimental results

Such experiments have been conducted by numerous researchers and published in the open scientific literature, including this Journal. When I was working at AT&T Bell Labs from 1980 to 1985, I launched a series of independent replications, eventually conducting some 45 such experiments using experimental protocols and electronic devices like those described above (Radin & Utts, 1989; Radin, 1982). My colleagues at Bell Labs served as experimental subjects. Thirteen of my experiments were significant at the p = .05 level, resulting in an exact binomial probability of p = 2.33 × 10⁻⁸. Similarly conducted control tests produced results expected by chance.

From 1985 to 1989, while working first at SRI International and later at Princeton University, I had the opportunity to conduct several additional experiments using different sources of randomness and new experimental protocols (Radin & May, 1987; Radin, 1988). These experiments also produced statistically significant results, confirming my previous observations. A decade of research demonstrated to my satisfaction that under strictly controlled conditions, one could show that mental intention was predictably correlated with the behavior of a machine. In other words, Murphy's Law seemed to be more than mere superstition.

As a set of observations by a single researcher, these results were intriguing, but they did not count as conclusive in any formal scientific sense -- the keystone of science is independent replication and consensus agreement. In addition, in controversial realms where unconscious biases may sway one's judgment, it is advisable to consider the expert opinions of independent scientific review boards. So, although I accepted my own experimental results, I found it difficult to fully acknowledge the implications of my data unless I had a good reason to believe that my results were not an isolated case.


Expert opinion

In searching for the opinions of scientific review boards, I discovered that because of the possible technological and strategic implications of DMMI phenomena, experiments in this realm had been reviewed in depth during the decade of the 1980's by four separate US government-sponsored scientific review boards. These reviews were conducted by the US Congressional Research Service (US Library of Congress, 1983), the US Army Research Institute (Palmer, 1985), the US National Research Council (Swets & Druckman, 1988; Harris & Rosenthal, 1988a, b) and the Congressional Office of Technology Assessment (1989). All four committees agreed that the evidence for DMMI merited serious attention by the scientific community, and suggested, as the Congressional Research Service put it, the existence of "an interconnectiveness of the human mind with other minds and with matter."

The four committees disagreed about the extent to which these experiments were independently replicable, about potential artifacts and flaws in some experiments, and about the degree to which selective reporting practices may have inflated the overall estimate of success.


I never meta-analysis I didn't like

To help resolve the disagreements raised by the four scientific review boards, a colleague (Roger Nelson) and I conducted a quantitative meta-analysis of all published DMMI experiments (Radin & Nelson, 1989). A meta-analysis is an integrative review of all experiments studying the same effect or hypothesis. Because a meta-analysis is concerned with the actual outcome of an experiment, rather than simply whether it was reported as significant or nonsignificant, it allows one to quantitatively determine replication rates, to judge the relationship between study outcomes and experimental quality, and to assess the plausibility that selective reporting might account for the observed end-result. Meta-analyses are now widely accepted in the social, behavioral, and medical sciences as valuable quantitative tools for summarizing large bodies of empirical evidence.

Our meta-analysis retrieved 152 experimental reports from refereed journal articles, technical reports, dissertations, conference proceedings, and unpublished manuscripts. These reports were written by 68 principal investigators, representing 15 laboratories in 8 countries, who together published a total of 597 experiments, consisting of over one billion "mentally influenced" bits, and 235 control studies, consisting of over two billion bits.

These experiments were conducted beginning in the mid-1950's at US government laboratories, Boeing Laboratories, AT&T Bell Laboratories, MIT, Princeton University, University of Edinburgh, and many other industrial and academic labs (e.g., Jahn, 1982; Smith et al, 1963; Hall et al, 1977). Most of the experiments were conducted by physicists interested in whether conscious observation might affect quantum states and by psychologists interested in studying the nature of human intention. The overall results showed that the control data conformed to theoretical chance expectation, but the experimental data were highly significant, equivalent to a 15 standard error shift of the mean from chance. In other words, the results of these studies were not due to chance.
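
To see what a shift of many standard errors means operationally, consider Stouffer's method, one common way of combining study outcomes into a single composite z-score. The sketch below is illustrative only: the uniform per-study z-scores are invented placeholders, and the actual weighting used in Radin & Nelson (1989) may have differed.

```python
# Stouffer's method: combine independent study z-scores into one composite
# deviation expressed in standard-error units. The z_scores list below is a
# fabricated placeholder, NOT the actual meta-analytic data.
import math

def stouffer_z(z_scores):
    """Composite z = sum of individual z-scores divided by sqrt(k)."""
    return sum(z_scores) / math.sqrt(len(z_scores))

# Hypothetical example: 597 studies, each a modest 0.6 sigma above chance.
z_scores = [0.6] * 597
print(f"combined z = {stouffer_z(z_scores):.1f}")   # about 14.7
```

The point of the illustration is simply that a composite deviation of many standard errors can arise from a large number of individually modest effects.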


The filedrawer problem

One might object that the overall estimate of the DMMI effect was inflated because of selective reporting practices. It is well known that experiments with null and negative results are not published as often as experiments with successful results, and since a meta-analysis relies on published reports, the overall result might be smaller if we knew about all of the (potentially) unpublished, non-significant studies. These missing studies constitute the "filedrawer problem." There are a variety of ways of assessing its consequences. One way is to calculate a "failsafe" number: the estimated number of unretrieved or unpublished studies, averaging a zero effect, that would be needed to shift the overall result down from the observed value to a non-significant value (Rosenthal, 1984). For DMMI experiments, this turned out to be 54,000 studies, suggesting that the observed effect was not plausibly due to selective reporting practices. But perhaps the results were due to methodological problems or poor experimental quality?
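
For readers unfamiliar with the calculation, Rosenthal's failsafe number is commonly computed from the studies' z-scores as the square of their sum, divided by the squared critical value (1.645 for one-tailed p = .05), minus the number of retrieved studies. The sketch below shows that arithmetic with the same invented placeholder data as above; the exact procedure and critical value used in the meta-analysis may have differed, so the printed number will not match the reported figure.

```python
# Rosenthal's "failsafe N": the number of additional zero-effect studies
# needed to drag a combined result down to non-significance. z_crit = 1.645
# corresponds to one-tailed p = .05. The z_scores are placeholders, NOT the
# actual 597-study data set.
import math

def failsafe_n(z_scores, z_crit=1.645):
    """Estimated count of unpublished null studies needed to erase significance."""
    z_sum = sum(z_scores)
    return (z_sum ** 2) / (z_crit ** 2) - len(z_scores)

z_scores = [0.6] * 597                      # same hypothetical data as above
print(f"failsafe N = {failsafe_n(z_scores):,.0f}")
```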


Experimental quality

To assess experimental quality, a set of sixteen quality criteria was developed. These criteria covered all valid criticisms that had been published about the methodology of DMMI experiments. Each of the 597 experimental studies was reviewed for the presence or absence of these criteria, assigning "1" if a criterion was present and "0" if it was absent. The overall quality score was the sum of the individual criterion scores; thus a 0 represented poor quality and a 16 represented excellent quality. This is an accepted, conservative way of assessing quality, because it relies on what the investigators actually reported: investigators who failed to report their studies in full tended to receive lower quality scores.

Contrary to the hypothesis that the effect would disappear as experimental quality improved, this analysis revealed a tendency for better controlled studies to produce slightly larger effects. Thus, the DMMI effect was not due to the filedrawer problem or to any known methodological problems.
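
In code, this quality analysis amounts to summing sixteen binary criteria per study and correlating the resulting 0-16 score with each study's effect size. The sketch below uses fabricated data purely to show the shape of the check; it does not reproduce the actual criteria, codings, or statistics of Radin & Nelson (1989).

```python
# Sketch of the quality analysis: sixteen present/absent criteria are summed
# into a 0-16 quality score, which is then correlated with effect size.
# All values here are fabricated for illustration only.
import numpy as np

rng = np.random.default_rng(seed=2)
n_studies = 597

criteria = rng.integers(0, 2, size=(n_studies, 16))   # 1 = criterion met
quality = criteria.sum(axis=1)                         # 0 (poor) to 16 (excellent)

# Placeholder effect sizes (hit rate minus 0.5), unrelated to quality here.
effect_size = rng.normal(loc=0.0005, scale=0.002, size=n_studies)

r = np.corrcoef(quality, effect_size)[0, 1]
print(f"quality-vs-effect correlation r = {r:+.3f}")
# A strong negative correlation would suggest the effect vanishes as rigor
# improves; the reported relationship was, if anything, slightly positive.
```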


Meta-analysis summary

The meta-analysis determined that the DMMI effect observed in these experiments was (a) not due to chance, (b) successfully replicated by many different experimenters, (c) not correlated with potential methodological flaws or artifacts, and (d) not accounted for even if more than 50,000 studies averaging a null effect had been overlooked in the process of searching the literature.

A few years later, another colleague and I conducted another meta-analysis, this time looking at DMMI with physical objects (falling dice) as the targets (Radin & Ferrari, 1991). This study retrieved 148 experiments, reported by 52 investigators over 30 years, involving more than 2 million trials by 2,569 subjects. The results were similar to the prior meta-analysis. This large body of experiments provided persuasive statistical evidence for DMMI on physical objects: The overall effect for experimental data was more than 19 standard errors from chance.


"IF IT IS REAL, PUT IT TO WORK"


Given the substantial empirical evidence for DMMI effects, I decided to explicitly test the possibility that DMMI-mediated effects might be responsible for some computer failures. At the time (1990), I was working at Contel Technology Center (the R&D arm of Contel Corporation, a multi-billion dollar telecommunications company). I designed an experiment that used only commercially available, off-the-shelf equipment. A random number generator on a chip (called the T7002, AT&T Microelectronics, 1988) was used as the DMMI "target" to simulate an unstable electrical circuit within a computer. The idea was that erratic circuitry might be susceptible to small DMMI effects (Desmond, 1984; Morris, 1986). Two experiments using the chip were successful in demonstrating DMMI precisely where it was predicted to appear (p = .02 & p = .002, respectively) (Radin, 1990).

The next step was devising a method of putting the effect to work. Taking advantage of an internal competition within Contel to develop a future-oriented communication technology, I proposed to build a prototype "thought-switch," then test it in-house to see if we could demonstrate proof-of-principle. The project was approved, and the device was built and tested in late 1990. The prototype incorporated a new type of DMMI detector, and used some fancy statistical and digital signal processing techniques to analyze the results (Radin, 1989, 1990-1991, 1993). The test involved 10 volunteers selected from Contel Technology Center who were asked to mentally influence the system in strictly prescribed ways. The experiment was successful (Radin & Bisaga, 1992), prompting us to prepare a patent disclosure. Unfortunately, immediately after the prototyping tests were completed, GTE Corporation merged with Contel and the disruption of the merger halted all efforts on this project.
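
Although the prototype's design cannot be reproduced here, the cited sequential-analysis work (Radin, 1990-1991) suggests the general flavor of such a detector: accumulate random bits, track a running statistic, and "switch" only when the evidence for an intention-correlated bias crosses a preset threshold. The sketch below is a generic illustration of that idea under those assumptions, not the Contel device or its actual signal-processing chain.

```python
# Generic sketch of a sequential bias detector for a "thought-switch":
# accumulate bits and fire when the running z-score crosses a threshold.
# A rigorous sequential test must correct for repeatedly peeking at the data
# (e.g., via SPRT boundaries); this naive version omits that refinement.
import math
import random

def sequential_detector(bit_source, threshold=3.0, max_bits=100_000):
    """Return (decision, bits_used); decision is 'high', 'low', or None."""
    ones = 0
    for n in range(1, max_bits + 1):
        ones += bit_source()
        z = (ones - n / 2) / math.sqrt(n / 4)   # deviation from 50% in s.d. units
        if z >= threshold:
            return "high", n
        if z <= -threshold:
            return "low", n
    return None, max_bits

if __name__ == "__main__":
    unbiased = lambda: random.getrandbits(1)              # chance behavior
    biased = lambda: 1 if random.random() < 0.52 else 0   # small hypothetical bias
    print("unbiased source:", sequential_detector(unbiased))
    print("biased source:  ", sequential_detector(biased))
```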


WHERE DO WE GO FROM HERE?


Applications

Because the empirical data indicates that DMMI phenomena are not mediated by electromagnetic fields and are apparently not limited by distance (Dunne & Jahn, 1992), applications for DMMI devices include (1) a low-bit-rate signaling path for deep sea or deep space craft (including sending backup signals to lost satellites, such as the $1 billion Mars Observer that went missing in August, 1993); (2) prosthetic devices for paraplegics, such as thought-controlled robotic exoskeletons; and (3) secure entry systems and communication devices based upon person-unique "mind-prints."


Industrial interest

While Western science and technology have more or less dismissed the DMMI effect as a minor laboratory curiosity, or as an outright illusion, Japanese electronics giants have taken it much more seriously. R&D efforts have been reported at NEC, Uniden, and Matsushita (Asian Wall Street Weekly, 1985).

More recently, an article in R & D Magazine (1993) reported that Japan's Ministry of International Trade & Industry (MITI) created a study group drawing together academic, business, and government representatives. The group's name roughly translates as "Sensitivity Business Study Group." One of the topics seriously being studied by the MITI group is DMMI phenomena. Perhaps in response to the growing realization that there is an enormous technological and financial advantage to being the first to harness these phenomena, Sony Corporation established two laboratories in Tokyo in late 1992, specifically devoted to exploring DMMI applications (Dr. Robert Morris, personal communication, July 1993). This year (1994), my Center has attracted financial support for DMMI R&D from a major US-based microelectronics and telecommunications corporation.


CONCLUSIONS


In this paper, I proposed that the evolution of the human-computer interface will soon reach a stage where we will begin to see direct mind-machine interaction technologies. I have presented a summary of the empirical evidence supporting this proposal, and I have briefly mentioned some of the industrial interest in this potential new technology.

I imagine that some readers will be perplexed and perhaps a little disturbed by my audacious story about a budding DMMI technology. Surely something as revolutionary as a technology of genuine mind-machine interaction would be front-page news? One response is to point to a recent cover article on telepathy in New Scientist (McCrone, 1993) and to other references cited in recent mainstream sources (Atkinson et al, 1990; Utts, 1991; Bem & Honorton, 1994).

Another response is to explain that retrieving the evidence cited here required careful digging in specialized literature (such as the present Journal), as well as years of first-hand participation in the relevant scientific domain. Most of these experiments only produce small statistical effects, and there are essentially no adequate theories to explain the results. As a result, science journalists rarely cover the "story," and even some of my knowledgeable colleagues question whether a DMMI technology will be feasible in the short-term. Fortunately, industrial interest and funding for "emerging technologies" are significantly rising as world-wide competition for new devices and markets heats up, so we now have the opportunity to test the feasibility of such technologies rather than just talk about them.

It will take some time for scientists, engineers, and journalists to catch up with the relevant literature and with the scientific and theoretical implications of DMMI phenomena. However, I believe that as the studies mentioned here begin to infiltrate the mainstream, it will no longer be a matter of if, but when and in what form DMMI technologies will appear.


REFERENCES


Ashby, W. R. (1958). Requisite variety and its implications for the control of complex systems. Cybernetica, 1, 83-99.

Asian Wall Street Weekly, April 8, 1985, p. 18.

AT&T Data Sheet (February, 1988), T7001 Random Number Generator, Document DS88-43SMOS.

Atkinson, R. L., Atkinson, R. C., Smith, E. E., and Bem, D. J. (1990). Introduction to psychology, 10th ed. San Diego: Harcourt Brace Jovanovich.

Avizienis, A., Kopetz, H. & Laprie, J.C. (Eds.) (1987). Evolution of fault-tolerant computing. Vienna, Austria: Springer-Verlag.

Bem, D. J. & Honorton, C. (1994). Does psi exist? Replicable evidence for an anomalous process of information transfer. Psychological Bulletin, 115, 4-18.

Daviss, B. (1994). Brain powered. Discover, May, 58-65.

Desmond, J. (June 11, 1984). Computer crashes: A case of mind over matter? Computerworld, 18, 24, 1-8.

Dr. Robert Morris, personal communication, July, 1993.

Dunne, B. & Jahn, R. (1992). Experiments in remote human/machine interaction. Journal of Scientific Exploration, 6 (4), 311-332.

Gamow, G. (1959). The exclusion principle. Scientific American, 201, 74-86.

Gevins, A.S., Morgan, N.H., Bressler, S.L., Cutillo, B.A., White, R.M., Illes, J., Greer, D.S., Doyle, J.C. & Zeitlin, G.M. (1987). Human neuroelectric patterns predict performance accuracy. Science, 235, 580-585.

Hall, J., Kim, C., McElroy, B., & Shimony, A. (1977). Wave-packet reduction as a medium of communication. Foundations of Physics, 7, 759-767.

Harris, M. J. & Rosenthal, R. (1988a). Interpersonal expectancy effects and human performance research. Washington, DC: National Academy Press.

Harris, M. J. & Rosenthal, R. (1988b). Postscript to interpersonal expectancy effects and human performance research. Washington, DC: National Academy Press.

Hecht, H. & Dussault, H. (1987). Correlated failures in fault-tolerant computers. IEEE Transactions on Reliability, R-36, 171-175.

Hornick, R. J. (1987). Dreams - design and destiny. Human Factors, 29, 111-121.

Jahn, R. G. (1982). The persistent paradox of psychic phenomena: An engineering perspective. Proceedings of the IEEE, 70, 136-170.

Marks, D. F. & Kammann, R. (1980). The psychology of the psychic. Buffalo, NY: Prometheus Press.

McCarthy, R. L. (1988). Present and future safety challenges of computer control. Computer Assurance: COMPASS '88 (IEEE Catalog No. 88CH2628-6), New York: IEEE, 1-7.

McCrone, J. (May 15, 1993). Roll up for the telepathy test. New Scientist, 138 (1873), 29-33.

Morris, R. L. (1986). Psi and human factors: The role of psi in human-equipment interactions. In B. Shapin & L. Coly (Eds.), Current Trends in Psi Research. New York, Parapsychology Foundation, Inc., pp. 1-26.

Office of Technology Assessment (1989). Report of a workshop on experimental parapsychology. Journal of the American Society for Psychical Research, 83, 317-339.

Palmer, J. (1985). An evaluative report on the current status of parapsychology. US Army Research Institute, European Science Coordination Office, Contract Number DAJA 45-84-M-0405.

Parhami, B. (1988). From defects to failures: A view of dependable computing. Computer Architecture News, 16, 157-168.

Pelegrin, M. J. (1988). Computers in planes and satellites. In W. D. Ehrenberger (Ed.), Proceedings of the IFAC Symposium, Oxford, UK: Pergamon Press, 121-132.

Percival, I. (1992). Chaos: a science for the real world. In N. Hall (Ed.) The New Scientist Guide to Chaos, London: Penguin Books, pp. 11-21.

Petroski, H. (1985). To engineer is human: The role of failure in successful design. New York: St. Martin's Press.

Price, D. J. (1984). Of sealing wax and string. Natural History, 1, 49-56.

R & D Magazine (June, 1993), p. 21.

Radin, D. I. & Bisaga, G. (1992). Towards a high technology of the mind. Research in parapsychology 1991, Metuchen, NJ: Scarecrow Press.

Radin, D. I. & Ferrari, D. C. (1991). Effects of consciousness on the fall of dice: A meta-analysis. Journal of Scientific Exploration, 5, 61-84.

Radin, D. I. & May, E. C. (1987). Testing the intuitive data sorting model with pseudorandom number generators. Research in Parapsychology 1986, Metuchen, NJ: Scarecrow Press.

Radin, D. I. & Nelson, R. D. (1989). Evidence for consciousness-related anomalies in random physical systems. Foundations of Physics, 19, 1499-1514.

Radin, D. I. & Utts, J. M. (1989). Experiments investigating the influence of intention on random and pseudorandom events. Journal of Scientific Exploration, 3, 65-79.

Radin, D. I. (1982). Experimental attempts to influence pseudorandom number sequences. Journal of the American Society for Psychical Research, 76, 359-374.

Radin, D. I. (1988). Effects of a priori probability on psi perception: Does precognition predict actual or probable futures? Journal of Parapsychology, 52, 187-212.

Radin, D. I. (1989). Searching for "signatures" in anomalous human-machine interaction research: A neural network approach. Journal of Scientific Exploration, 3, 185-200.

Radin, D. I. (1990). Testing the plausibility of psi-mediated computer system failures. Journal of Parapsychology, 54, 1-19.

Radin, D. I. (1990-1991). Statistically enhancing psi effects with sequential analysis: A replication and extension. European Journal of Parapsychology, 8, 98-111.

Radin, D. I. (1993). Neural network analyses of consciousness-related patterns in random sequences. Journal of Scientific Exploration, 4, 355-374.

Rosenthal, R. (1984). Meta-analytic procedures for social research. Beverly Hills, CA: Sage Publications.

Shneiderman, B. (1987). Designing the user interface: Strategies for effective human-computer interaction. Reading, MA: Addison-Wesley.

Smith, W. R., Dagle, E. F., Hill, M. D., and Mott-Smith, J. (1963). Testing for extrasensory perception with a machine. Bedford, MA: Hanscom Field. US Air Force Cambridge Research Laboratories.

Stiffler, J. (October 1981). How computers fail. IEEE Spectrum, 44-46.

Swets, J. A. & Druckman, D. (1988). Enhancing human performance: Issues, theories, and techniques. Washington, DC: National Academy Press.

US Library of Congress (1983), Congressional Research Service, Research into "psi" phenomena: Current status and trends of congressional concern. (Compiled by C. H. Dodge).

Utts, J. (1991). Replication and meta-analysis in parapsychology. Statistical Science, 6 (4), 363-403.

This article originally appeared in the Journal of Scientific Exploration
