by John Horgan, senior writer
Champagne and big ideas are bubbling at the
Before everyone tucks into the filet mignon, David Liddle,
a computer entrepreneur who chairs the board of trustees, reviews the institute's
accomplishments. "There is a lot to be proud of," he says. There
certainly is, at least from a public-relations standpoint. The institute is not
large: it supports only six full-time researchers in
What Liddle does not say is that even some
scientists associated with the institute are beginning to fret over the gap
between such rhetoric and reality. Take Jack D. Cowan, a mathematical biologist
Cowan finds some work at
Some residents blame the media for the exaggerated claims associated with
the institute. "Ninety percent of it came from journalists," Arthur
asserts. Yet the economist cannot help but play the evangelist. "If
The grandest claim of Santa Fe'ers is that they
may be able to construct a "unified theory" of complex systems. John
H. Holland, a computer scientist with joint appointments at the
Some workers now disavow the goal of a unified theory. "I don't even
know what that would mean," says Melanie Mitchell, a former student of
Scientists familiar with the history of other would-be unified theories [see
box on pages 108 and 109] are not sanguine about the prospects for their
The problems of complexity begin with the term itself. Complexologists have struggled to distinguish their field from a closely related pop-science movement, chaos. When all the fuss was over, chaos turned out to refer to a restricted set of phenomena that evolve in predictably unpredictable ways. Various attempts have been made to provide an equally precise definition of complexity. The most widely touted definition involves "the edge of chaos." The basic idea is that nothing novel can emerge from systems with high degrees of order and stability, such as crystals. On the other hand, completely chaotic systems, such as turbulent fluids or heated gases, are too formless. Truly complex things (amoebae, bond traders and the like) appear at the border between rigid order and randomness.
Most popular accounts credit the idea to Christopher Langton and his co-worker Norman H. Packard (who coined the phrase). In experiments with cellular automata, they concluded that a system's computational capacity (that is, its ability to store and process information) peaks in a narrow regime between highly periodic and chaotic behavior. But cellular-automaton investigations by two other SFI researchers, James P. Crutchfield and Mitchell, did not support the conclusions of Packard and Langton. Crutchfield and Mitchell also question whether "anything like a drive toward universal-computational capabilities is an important force in the evolution of biological organisms." Mitchell complains that in response to these criticisms, proponents of the edge of chaos keep changing their definition. "It's a moving target," she says.
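Packard and Langton's actual experiments were more involved, but the flavor of the dispute can be conveyed with elementary cellular automata. The sketch below is a toy illustration, not their setup: it compares an ordered rule (250) with a chaotic one (30), using the average fraction of cells that flip per step as a crude gauge of where a rule sits between frozen order and randomness. The lattice size, step count and rule choices are arbitrary assumptions.

```python
import random

def step(cells, rule):
    """Apply an elementary (radius-1, binary) cellular-automaton rule once,
    with wraparound edges; `rule` is the standard Wolfram rule number."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

def activity(rule, n=200, steps=200, seed=0):
    """Mean fraction of cells that change per step: a crude order/chaos gauge."""
    rng = random.Random(seed)
    cells = [rng.randint(0, 1) for _ in range(n)]
    changed = 0.0
    for _ in range(steps):
        nxt = step(cells, rule)
        changed += sum(a != b for a, b in zip(cells, nxt)) / n
        cells = nxt
    return changed / steps

# Rule 250 quickly freezes into a fixed pattern; rule 30 stays disordered.
print(activity(250), activity(30))
```

On this crude measure, rule 250 settles to near-zero activity while rule 30 keeps roughly half its cells flipping; the contested "edge of chaos" claim concerns rules poised between such extremes.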
Other definitions of complexity have been proposed-at least 31, according to
a list compiled several years ago by Seth Lloyd of the Massachusetts Institute
of Technology, a physicist and
The Poetry of Artificial Life
Such problems highlight the awkward fact that complexity exists, in some murky sense, in the eye of the beholder. At various times, researchers have debated whether complexity has become so meaningless that it should be abandoned, but they invariably conclude that the term has too much public-relations value. Complexologists often employ "interesting" as a synonym for "complex." But what government agency would supply funds for research on a "unified theory of interesting things"? (The Santa Fe Institute, incidentally, will receive about half its $5-million 1995 budget from the federal government and the rest from private benefactors.)
Complexologists may disagree on what they are studying, but most concur on how they should study it: with computers. This faith in computers is epitomized by artificial life, a subfield of complexity that has attracted much attention in its own right. Artificial life is the philosophical heir of artificial intelligence, which preceded it by several decades. Whereas artificial-intelligence researchers seek to understand the mind by mimicking it on a computer, proponents of artificial life hope to gain insights into a broad range of biological phenomena. And just as artificial intelligence has generated more portentous rhetoric than tangible results, so has artificial life.
As Langton proclaimed in the inaugural issue of the journal Artificial Life last year, "Artificial life will teach us much about biology, much that we could not have learned by studying the natural products of biology alone, but artificial life will ultimately reach beyond biology, into a realm we do not yet have a name for, but which must include culture and our technology in an extended view of nature."
Langton has promulgated a view known as
"strong a-life." If a programmer creates a world of
"molecules" that-by following rules such as those of
chemistry-spontaneously organize themselves into entities that eat, reproduce
and evolve, Langton would consider those entities to
be alive "even if it's in a computer." Inevitably, artificial life
has begotten artificial societies. Joshua M. Epstein, a political scientist who
Artificial life, and the entire field of complexity, seems to be based on a seductive syllogism: There are simple sets of mathematical rules that when followed by a computer give rise to extremely complicated patterns. The world also contains many extremely complicated patterns. Conclusion: Simple rules underlie many extremely complicated phenomena in the world. With the help of powerful computers, scientists can root those rules out.
This syllogism was refuted in a brilliant paper published in Science
last year. The authors, led by philosopher Naomi Oreskes
"Like a novel, a model may be convincing-it may ring true if it is consistent with our experience of the natural world," Oreskes and her colleagues state. "But just as we may wonder how much the characters in a novel are drawn from real life and how much is artifice, we might ask the same of a model: How much is based on observation and measurement of accessible phenomena, how much is based on informed judgment, and how much is convenience?"
Numerical models work particularly well in astronomy and physics because
objects and forces conform to their mathematical definitions so precisely.
Mathematical theories are less compelling when applied to more complex
phenomena, notably anything in the biological realm. As the evolutionary
biologist Ernst Mayr of
Langton, surprisingly, seems to accept the possibility that artificial life might not achieve the rigor of more old-fashioned research. Science, he suggests, may become less "linear" and more "poetic" in the future. "Poetry is a very nonlinear use of language, where the meaning is more than just the sum of the parts," Langton explains. "I just have the feeling that culturally there's going to be more of something like poetry in the future of science."
A Critique of Criticality
A-life may already have achieved this goal, according to the evolutionary
biologist John Maynard Smith of the
Not all complexologists accept that their field is
doomed to become soft. Certainly not Per Bak, a
physicist at Brookhaven National Laboratory who is on the
Bak and others have developed what some consider to be the leading candidate for a unified theory of complexity: self-organized criticality. Bak's paradigmatic system is a sandpile. As one adds sand to the top of the pile, it "organizes" itself by means of avalanches into what Bak calls a critical state. If one plots the size and frequency of the avalanches, the results conform to a power law: the frequency of avalanches falls off as a power of their size, so small avalanches are common and large ones rare, with no characteristic scale in between.
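The sandpile is easy to simulate. The sketch below is a minimal Bak-Tang-Wiesenfeld-style model, not Bak's own code: a site holding four or more grains topples, shedding one grain to each neighbor, and sand falls off the edges of the grid. The grid size, grain count and size thresholds are arbitrary choices.

```python
import random

def sandpile(n=30, grains=20000, seed=1):
    """Bak-Tang-Wiesenfeld sandpile on an n x n grid; sand falls off the edges.
    Returns the size (number of topplings) of the avalanche each grain triggers."""
    rng = random.Random(seed)
    z = [[0] * n for _ in range(n)]
    sizes = []
    for _ in range(grains):
        i, j = rng.randrange(n), rng.randrange(n)
        z[i][j] += 1                      # drop one grain at a random site
        size = 0
        unstable = [(i, j)]
        while unstable:
            x, y = unstable.pop()
            if z[x][y] < 4:
                continue                  # already relaxed by an earlier topple
            z[x][y] -= 4                  # topple: shed one grain to each neighbor
            size += 1
            if z[x][y] >= 4:
                unstable.append((x, y))   # still over threshold
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < n and 0 <= ny < n:
                    z[nx][ny] += 1
                    if z[nx][ny] >= 4:
                        unstable.append((nx, ny))
        sizes.append(size)
    return sizes

sizes = sandpile()
small = sum(1 <= s < 10 for s in sizes)
big = sum(s >= 50 for s in sizes)
print(small, big)
```

Tallying the avalanche sizes shows the signature Bak describes: once the pile reaches its critical state, small avalanches vastly outnumber large ones, yet large ones never stop occurring.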
Bak notes that many phenomena, including earthquakes, stock-market fluctuations, the extinction of species and even human brain waves, display this pattern. He concludes that "there must be a theory here." Such a theory could explain why small earthquakes are common and large ones uncommon, why species persist for millions of years and then vanish, why stock markets crash and why the human mind can respond so rapidly to incoming data.
"We can't explain everything about everything, but something about everything," Bak says. Work on complex systems, he adds, will bring about a "revolution" in such traditionally soft sciences as economics, psychology and evolutionary biology. "These things will be made into hard sciences in the next years in the same way that particle physics and solid-state physics were made hard sciences."
In his best-seller Earth in the Balance, Vice President Al Gore said Bak's theory had helped him to understand not only the
fragility of the environment but also "change in my own life." But
Sidney R. Nagel of the
Bak retorts that other sandpile
experiments confirm his model. Nevertheless, the model may be so general and so
statistical in nature that it cannot really illuminate even those systems it
describes. After all, many phenomena can be described by a Gaussian or bell
curve. But few scientists would claim that human intelligence scores and the
apparent luminosity of galaxies must derive from common causes. "If a
theory applies to everything, it may really apply to nothing," remarks the
Another skeptic is Philip W. Anderson, a condensed-matter physicist and
Nobel laureate at
"More is different" became a rallying cry for chaos and
Kauffman says he shares the concern of his former teacher John Maynard Smith about the scientific content of some artificial-life research. "At some point," he explains, "artificial life drifts off into someplace where I cannot tell where the boundary is between talking about the world-I mean, everything out there-and really neat computer games and art forms and toys." When he does computer simulations, Kauffman adds, he is "always trying to figure out how something in the world works, or almost always."
Kauffman's simulations have led him to several conclusions. One is that when a system of simple chemicals reaches a certain level of complexity or interconnectedness (which Kauffman has linked both to the edge of chaos concept and to Bak's self-organized criticality), it undergoes a dramatic transition, or phase change. The molecules begin spontaneously combining to create larger molecules of increasing complexity and catalytic capability. Kauffman has argued that this process of "autocatalysis," rather than the fortuitous formation of a molecule with the ability to replicate and evolve, led to life.
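The phase change Kauffman describes resembles the connectivity transition in random graphs. The sketch below illustrates that generic transition only, not Kauffman's chemistry model: when the ratio of randomly placed links to nodes passes roughly one half, a "giant" connected cluster abruptly appears, much as a sparse web of catalytic reactions might abruptly become a self-sustaining whole. The node and link counts here are arbitrary assumptions.

```python
import random
from collections import Counter

def largest_cluster(n, edges, seed=0):
    """Size of the largest connected component of a graph with n nodes and
    `edges` randomly placed links, using union-find with path halving."""
    rng = random.Random(seed)
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for _ in range(edges):
        a, b = find(rng.randrange(n)), find(rng.randrange(n))
        if a != b:
            parent[a] = b                 # merge the two clusters
    return max(Counter(find(v) for v in range(n)).values())

n = 2000
# Below the threshold (links/nodes around 0.5) all clusters stay tiny;
# above it, one giant cluster abruptly spans much of the graph.
print(largest_cluster(n, 400), largest_cluster(n, 1600))
```

The abruptness is the point: nothing gradual happens to any individual link, yet the system as a whole crosses a threshold, which is the kind of emergence Kauffman invokes.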
"Obscurantism and Mystification"
Kauffman has also proposed that arrays of interacting genes do not evolve randomly but converge toward a relatively small number of patterns, or "attractors," to use a term favored by chaos theorists. This ordering principle, which Kauffman calls "antichaos," may have played a larger role than did natural selection in guiding the evolution of life. More generally, Kauffman thinks his simulations may lead to the discovery of a "new fundamental force" that counteracts the universal drift toward disorder required by the second law of thermodynamics.
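Kauffman's gene-array work uses random Boolean networks, which are simple to reproduce in spirit. In the sketch below (a toy with arbitrary parameters, not Kauffman's published models), each of a dozen "genes" reads two randomly chosen inputs through its own random truth table; trajectories started from many random states funnel into a handful of short cycles, the "attractors" in question.

```python
import random

def random_boolean_network(n=12, k=2, seed=3):
    """Kauffman-style network: each of n genes reads k random inputs
    through its own random Boolean function (a 2**k-entry truth table)."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    def step(state):
        return tuple(
            tables[g][sum(state[i] << b for b, i in enumerate(inputs[g]))]
            for g in range(n))
    return step

def attractor(step, state):
    """Follow the trajectory until a state repeats; return the cycle it ends in."""
    seen = {}
    while state not in seen:
        seen[state] = len(seen)
        state = step(state)
    start = seen[state]
    return frozenset(s for s, t in seen.items() if t >= start)

step = random_boolean_network()
rng = random.Random(0)
starts = [tuple(rng.randint(0, 1) for _ in range(12)) for _ in range(200)]
attractors = {attractor(step, s) for s in starts}
print(len(attractors))   # far fewer attractors than starting states
```

That many starting states collapse onto few attractors is the "antichaos" Kauffman points to; whether it says anything about real genomes is exactly what his critics dispute.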
In a book to be published later this year, At Home in the Universe, Kauffman asserts that both the origin of life on the earth and its subsequent evolution were not "vastly improbable" but in some fundamental sense inevitable; life, perhaps similar to ours, almost certainly exists elsewhere in the universe. Of course, scientists have engaged in interminable debates over this question. Many have taken Kauffman's point of view. Others, like the great French biologist Jacques Monod, have insisted that life is indeed "vastly improbable." Given our lack of knowledge of life elsewhere, the issue is entirely a matter of opinion; all the computer simulations in the world cannot make it less so.
Kauffman's colleague Murray Gell-Mann, moreover, denies that science needs a new force to account for the emergence of order and complexity. In his 1994 book, The Quark and the Jaguar, Gell-Mann sketches a rather conventional-and reductionist-view of nature. The probabilistic nature of quantum mechanics allows the universe to unfold in an infinite number of ways, some of which generate conditions conducive to the appearance of complex phenomena. As for the second law of thermodynamics, it permits the temporary growth of order in relatively isolated, energy-driven systems, such as the earth.
"When you look at the world that way, it just falls into place!" Gell-Mann cries. "You're not tortured by these strange questions anymore!" He emphasizes that researchers have much to learn about complex systems; that is why he helped to found the Santa Fe Institute. "What I'm trying to oppose," he says, "is a certain tendency toward obscurantism and mystification."
Maybe complexologists, even if they cannot create a science for the next millennium, can limn the borders of the knowable. The Santa Fe Institute seemed to raise that possibility last year when it hosted a symposium on "the limits of scientific knowledge." For three days, a score of scientists, mathematicians and philosophers debated whether it might be possible for science to know what it cannot know. After all, many of the most profound achievements of 20th-century science, among them the theory of relativity, quantum mechanics, Gödel's theorem and chaos theory, prescribe the limits of knowledge.
Some participants, particularly those associated with the institute,
expressed the hope that as computers grow in power, so will science's ability
to predict, control and understand nature. Others
demurred. Roger N. Shepard, a psychologist at