Cellular Automata


Cellular automata (henceforth: CA) are discrete, abstract computational systems that have proved useful both as general models of complexity and as more specific representations of non-linear dynamics in a variety of scientific fields. Firstly, CA are (typically) spatially and temporally discrete: they are composed of a finite or denumerable set of homogeneous, simple units, the atoms or cells. At each time unit, the cells instantiate one of a finite set of states.


 
Fig. 5

At time step 250 (not shown in Fig. 5), the grey areas disappear and all cells end up white, i.e., the classification made by the automaton is correct. A high-level description of what is going on would have it that the white, black and grey regions are “expanding”, “contracting” and “moving”, and by so doing manage to carry signals along the lattice. But how are we to explain the emergent computation performed by CA of this kind? Building on previous work on computation theory applied to CA (Hanson, Crutchfield 1992; Crutchfield, Hanson 1993), Mitchell and Crutchfield filtered out from the original diagram what they call “domains”, i.e., dynamically homogeneous spatial regions (Crutchfield, Mitchell 1995, p. 10745):

 
Fig. 6

Although in this case the “domains” are manifest, it is crucial to point out that their definition is rigorously mathematical. In fact, the whole process of domain detection can be carried out algorithmically (see Hanson, Crutchfield 1992 for details). When the boundary between two domains remains spatially localized over time, that boundary itself becomes a “particle”:

Embedded particles are a primary mechanism for carrying information (…) over long space-time distances. (…) Logical operations on the signals are performed when the particles interact. The collection of domains, domain walls, particles and particle interactions for a CA represents the basic information processing elements embedded in the CA's behavior—the CA's “intrinsic” computation. (Crutchfield, Mitchell 1995, p. 10744)
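To make the claim that domain detection can be carried out algorithmically a little more concrete, here is a minimal Python sketch. It is only a crude stand-in for the regular-language machinery of Hanson and Crutchfield 1992: it flags cells whose local window is all white, all black, or strictly alternating (plausibly the pattern rendered as grey in Fig. 5), and treats every other cell as a candidate domain boundary.

```python
# A deliberately crude "domain filter" for a 1D binary space-time diagram.
# The real construction (Hanson, Crutchfield 1992) detects domains as regular
# languages; this sketch only labels cells whose local window is constant or
# strictly alternating, and leaves the rest as candidate domain boundaries.

def classify_cell(row, i, radius=2):
    """Label cell i of `row` as 'white', 'black', 'checker' or 'boundary'."""
    n = len(row)
    window = [row[(i + d) % n] for d in range(-radius, radius + 1)]
    if all(v == 0 for v in window):
        return "white"
    if all(v == 1 for v in window):
        return "black"
    if all(window[k] != window[k + 1] for k in range(len(window) - 1)):
        return "checker"
    return "boundary"          # possible domain wall, i.e., a particle site

def filter_diagram(diagram, radius=2):
    """Apply the filter to every cell of a space-time diagram (a list of rows)."""
    return [[classify_cell(row, i, radius) for i in range(len(row))]
            for row in diagram]
```

Applied to a diagram like the one in Fig. 5, such a filter leaves the homogeneous regions labelled and exposes the localized boundaries whose trajectories are the particles catalogued next.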

There are five stable particles (termed α, γ, δ, ε, µ) and one unstable particle (β) for this automaton: their interactions (annihilation, decay, reaction) support the emergent logic of the system. The two circles in the image above are examples of what may happen during an interaction. In the first case, α + δ → µ, a spatial configuration representing high, low, and then ambiguous densities is mapped to a high-density signal µ; in the second, µ + γ → α, a spatial configuration representing high, ambiguous, and then low density is mapped to an ambiguous-density signal α (Crutchfield, Mitchell 1995, p. 10745). The whole computational mechanics is worth exploring in more detail, but we can already safely generalize to the basic philosophical point of this and related works.

According to O'Connor and Wong 2006, in the context of dynamical systems and the study of complexity, emergence is characterized by most contemporary philosophers “strictly in terms of limits on human knowledge of complex systems. Emergence for such theorists is fundamentally an epistemological, not metaphysical, category” (O'Connor, Wong 2006, Sec. 2). But the Crutchfield-Mitchell approach suggests a different perspective. Firstly, the emergent computational properties in CA can in an important sense be objectively defined (see Crutchfield 1994a and the more accessible Crutchfield 1994b): although it is customary in this setting to talk of emergent computation being “in the eye of the beholder”, the detection and classification of patterns is itself algorithmic, not just phenomenological. Secondly, Crutchfield defines CA-emergence as a paradigmatic case of intrinsic emergence: the emerging patterns “are important within the system” (Crutchfield 1994b, p. 3), not merely important for an observer outside the system. More precisely, they are mathematically grounded in the basic features of the system, despite not being explicitly mentioned in its standard abstract characterization (Crutchfield mentions, as non-intrinsic emergent phenomena, the patterns in the Belousov-Zhabotinsky reaction and the energy recurrence in anharmonic oscillator chains reported by Fermi, Pasta and Ulam—see Crutchfield 1994b).

Crutchfield infers from this that many cases of emergence are indeed not reducible to some phenomenological interaction with an observer. They are genuine instances of an intrinsic phenomenon, not the results of some human-biased discovery (Crutchfield 1994b, p. 2). If emergence were not intrinsic, scientific activity would indeed be a subjective enterprise of “pattern discovery”: 

Why? Simply because emergence defined without this closure leads to an infinite regress of observers detecting patterns of observers detecting patterns... (Crutchfield 1994b, p. 10)

Summarizing Crutchfield's work, Miller and Page say that the concept of emergence “has thus made the transition from metaphor to a measure, from something that could only be identified by ocular magic to something that can be captured using standard statistics”  (Miller, Page 2007, p. 234).

Such remarks may sound philosophically naive to a trained epistemologist. It is not obvious, for instance, that the existence of a mathematical or specifically algorithmic emergent pattern would block a supposed regress entailed by a non-mathematical, or “merely phenomenological”, emergence. Why would science as an activity of (occasionally) non-algorithmic pattern discovery be a merely subjective enterprise? If these claims can be re-phrased in a philosophically sophisticated way, though, they may challenge standard definitions of weakly emergent properties (in the sense of Chalmers 2002). Teller 1992, Clark 1996 and Bedau 1997, for instance, all run together instances of “pattern discovery”—a subjective, observer-dependent activity—with instances of intrinsic emergence—a phenomenon that, as we have just seen, can be characterized in the context of CA as objective and statistically significant (see the relevant Section of the entry on Emergent properties). Specifically, Bedau 1997 defines a macro-state as emergent just in case it can be derived from knowledge of the system's micro-components only by direct simulation of the overall system evolution. Now, while the definition may work pretty well in the case of completely chaotic systems, it does not sit well with CA such as Rule φ17083. According to the proposed definition, the answer to the classification problem given by Rule φ17083 is an emergent phenomenon just in case the only way to go from t0 to tn is by explicitly simulating the system's evolution.

[Figures: the lattice configuration at t0 and at tn]

As it turns out, however, this is not the case. Using Crutchfield's particle model, it is possible to predict the result of the classification simply by performing particle calculations, without bothering about the underlying dynamics (Hordijk, Crutchfield, Mitchell 1996). Here, then, the emergent computation in the CA would not count as a case of emergence for Bedau.
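To see what the two routes amount to in practice, here is a sketch of the “direct simulation” route that Bedau's definition privileges. The lookup table of the evolved classifier is not reproduced here; a simple local-majority rule (a much weaker classifier) stands in for it. The step-by-step loop is the same in either case, and it is precisely this loop that the particle-level calculation allows one to bypass.

```python
# "Direct simulation" in Bedau's sense: to learn the classifier's verdict, we
# iterate the local rule over the whole lattice, step by step, and inspect the
# final configuration. `rule` stands in for any radius-r update function; the
# evolved rule table of Crutchfield and Mitchell is not reproduced here.
import random

def step(config, rule, r):
    """One synchronous update of a 1D CA with periodic boundaries."""
    n = len(config)
    return [rule(tuple(config[(i + d) % n] for d in range(-r, r + 1)))
            for i in range(n)]

def classify_by_simulation(config, rule, r, max_steps=500):
    """Run the CA until it settles (or max_steps is hit) and report a verdict."""
    for _ in range(max_steps):
        nxt = step(config, rule, r)
        if nxt == config:                      # fixed point reached
            break
        config = nxt
    if all(c == 1 for c in config):
        return "majority of 1s"
    if all(c == 0 for c in config):
        return "majority of 0s"
    return "undecided after max_steps"

# Placeholder rule: local majority over a radius-3 window. It is a poor
# classifier compared with the evolved rule, but the simulation loop is the same.
def majority(window):
    return 1 if sum(window) > len(window) // 2 else 0

if __name__ == "__main__":
    initial = [random.randint(0, 1) for _ in range(149)]
    print(classify_by_simulation(initial, majority, r=3))
```

Since the particle model makes the final verdict predictable without running this loop, the classification performed by the automaton fails Bedau's test, even though it is intrinsically emergent in Crutchfield's sense.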

What are the ontological consequences of this? While emergent patterns in CA have been examined by both reductionist (Dennett 2003) and emergentist (Thompson 2007) philosophers, it is fair to say that the CA literature so far has not significantly contributed to the ongoing philosophical debate on the purely ontological side of reductionism. CA patterns that are “objectively” detected via computation, as per the Mitchell-Crutchfield approach, are not ipso facto new primitives to be included in an ontology. It may well be that features of a CA that are objective, in the sense of not depending on the interaction with an observer, are nevertheless ontologically reducible to more basic entities via suitable definitions (see Kim 1999, Dennett 1991).

3.2 CA and Free Will

Philosophers have debated the relationship between determinism and free will for over two millennia. Two opposite stances can be taken towards the problem: compatibilism maintains that free will is compatible with a deterministic world, which incompatibilism denies (see the entries on Free will and Compatibilism). Surprisingly enough, Daniel Dennett and Stephen Wolfram have both argued that adopting the CA perspective can provide a solution to, or perhaps a dissolution of, the longstanding mystery of free will.

Daniel Dennett has claimed that Life “renders vivid and robust a set of intuitions that are otherwise absent” (Dennett 2003, p. 40). According to Dennett, a major obstacle to accepting compatibilism is our deeply rooted persuasion that determinism implies inevitability, thus excluding free will (Dennett 2003, p. 25). We may thus make compatibilism palatable by exhibiting an intuitive counterexample to that conviction: a deterministic world in which, however, not everything is inevitable, i.e., something is avoidable (Dennett 2003, p. 56). Dennett maintains that CA can do this. He takes Life as a vivid illustration of how, in a deterministic but complex enough world, we can abstract away from the bottom level and the micro-laws deterministically governing it, and describe what is happening by taking the emergent level seriously. Recall the eater-glider dynamics:

[Figures: the eater-glider configuration at t0, t2 and t4]


At t0, an observer aiming at predicting the evolution of this space-time region has basically two choices: she can take into account (what Dennett calls) the physical level, and compute pixel by pixel the value of each cell state at each time step; or she can focus on the design level, and employ high-level concepts such as those of glider and eater to ground her predictions (Dennett 2003, p. 39). The first option is perfectly deterministic, but has a flaw: it is so time consuming that, by the time you have made the required computation, the world has already evolved (this is especially true of a universal CA, as we hinted above and shall expand on soon). The second option is much faster: you know without much calculation what is going to happen to a glider meeting an eater. The predictions, though, cannot be 100% reliable:

Whereas at the physical level, there are absolutely no exceptions to the general law, at the design level our generalizations have to be hedged: they require “usually” clauses (…). Stray bits of debris from earlier events can “break” or “kill”  one of the objects in the ontology at this level. Their salience as real things is considerable, but not guaranteed. (Dennett 2003, p. 40)

Dennett's point is that avoidance itself is a high-level concept. As such, it is compatible with a deterministic bottom level (because the concepts at the emergent level are, by design, independent of the micro-laws). The physical description and the design description of Life are different interpretations of the same basic ontology, namely, the sparse ontology of CA. While in theory we could avoid the introduction of emergent concepts, in practice it is only by speaking of gliders, movements and avoidance that we can make sense of the evolution of the system (Dennett 2003, pp. 43–44). Even without knowing Life's physics, we could do a good job of predicting the future by mentioning only high-level patterns. Life is just a toy universe but, Dennett claims, these new intuitions are sufficient to see that, in some deterministic worlds, something is avoidable. For instance, it is true at the design level that gliders actually avoid eaters. Thus, the inference from determinism to inevitability can be blocked.
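For concreteness, the physical level Dennett has in mind is nothing over and above Life's local update rule applied cell by cell; gliders, eaters and avoidance only appear once the resulting grids are redescribed at the design level. A minimal sketch, using Life's standard rules (a live cell survives with two or three live neighbours, a dead cell becomes live with exactly three):

```python
# Dennett's "physical level" for Life: compute, pixel by pixel, each cell's
# next state from its eight neighbours. This sketch uses a finite grid with a
# dead border; nothing in it mentions gliders, eaters or avoidance.

def life_step(grid):
    """One synchronous update of Conway's Life on a finite grid of 0s and 1s."""
    rows, cols = len(grid), len(grid[0])

    def live_neighbours(r, c):
        return sum(grid[r + dr][c + dc]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0)
                   and 0 <= r + dr < rows and 0 <= c + dc < cols)

    return [[1 if (grid[r][c] == 1 and live_neighbours(r, c) in (2, 3))
                  or (grid[r][c] == 0 and live_neighbours(r, c) == 3)
             else 0
             for c in range(cols)]
            for r in range(rows)]
```

Predicting a glider-eater encounter at this level means iterating life_step until the outcome is settled; the design-level shortcut buys speed at the price of the hedged, “usually”-qualified generalizations quoted above.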

A standard reply to Dennett's argument consists in denying that Life-avoidance is “real” avoidance. Dennett himself puts a version of this argument in the mouth of Conrad, a fictional skeptic philosopher discussing Dennett's idea throughout his book:

It may look like avoidance, but it's not real avoidance. Real avoidance involves changing something that was going to happen into something that doesn't happen. (Dennett 2003, p. 58)

Rephrasing Dennett's example, we can identify an ambiguity in Conrad's argument. Imagine that a baseball is going to hit you in the face—but you dodge it: a clear case of real human avoidance. In what sense was the baseball “going to” hit you in the face? (Dennett 2003, p. 59) One might say that it was never really going to hit you, precisely because it triggered the reaction of whatever “avoidance system” you have. What is the difference between this avoidance and Life-avoidance? For Dennett, this is not a difference in kind, but in complexity: gliders and humans both have avoidance systems, but human systems are much more sophisticated. The choice of a universal CA as a toy universe allows us to draw a stronger conclusion: since we know that Life is equivalent to a universal Turing machine, as was explained above, some patterns in that universe may display avoidance systems at least as complex as ours. Dennett claims that compatibilism has thus won the first round: “you agree that (…) I've shifted the burden of proof: there shall be no inferring inevitability in any sense from determinism without mounting a supporting argument” (Dennett 2003, p. 61).

Stephen Wolfram addresses the phenomenon of free will in his book on CA:

From the discoveries in this book it finally now seems possible to give an explanation for this [free will]. And the key, I believe, is the phenomenon of computational irreducibility. (Wolfram 2002, p. 750)

We introduced the issue of computational (or algorithmic) irreducibility when we explained the philosophical consequence of a universality proof for an automaton, namely that, although a system follows definite underlying laws, “its overall behavior can still have aspects that fundamentally cannot be described by reasonable laws” (Wolfram 2002, p. 750). The only way to predict the system's behavior is to perform, step by step, all the micro-computations. In this “separation between the underlying rules for the system and its overall behavior” (Wolfram 2002, p. 751) lies the secret of free will, since it seems that we attribute free will to a system just when “we cannot readily make predictions about the behavior of the system” (Wolfram 2002, p. 751). According to Wolfram, CA play a leading role in providing a new framework to understand the phenomenon. While explanations from chaos theory and quantum randomness have recently been proposed (see the entry on Chaos), “nothing like this is actually needed” (Wolfram 2002, p. 752). By observing CA, we can understand how something with simple and definite micro-rules, like those governing our neurons, can produce behavior free of obvious rules: “the crucial point is that this happens just through the intrinsic evolution of the system—without the need for any additional input from outside or from any sort of explicit source of randomness” (Wolfram 2002, p. 752). Wolfram's point is similar to some of Dennett's remarks: taking some sort of “design stance”, Wolfram suggests that one can talk about a cellular automaton as if it just “decides” to do this or that, “thereby effectively attributing to it some sort of free will” (Wolfram 2002, p. 752).
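Computational irreducibility is easy to exhibit with an elementary rule such as Rule 110, whose space-time diagram reappears in Fig. 7 below. The sketch that follows builds the rule table from the binary expansion of the rule number and then simply applies it, step by step; absent some predictive shortcut, running this loop is how one finds out what the lattice looks like after n steps.

```python
# Elementary CA Rule 110: the eight neighbourhood patterns 111, 110, ..., 000
# are mapped to the corresponding bits of 110 in binary (01101110). Prediction
# here just is step-by-step computation: Wolfram's computational irreducibility.

RULE = 110
TABLE = {tuple(int(b) for b in f"{p:03b}"): (RULE >> p) & 1 for p in range(8)}

def step(cells):
    """One synchronous update with periodic boundary conditions."""
    n = len(cells)
    return [TABLE[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

def run(cells, steps):
    """Return the space-time diagram: the initial row plus `steps` updates."""
    history = [cells]
    for _ in range(steps):
        cells = step(cells)
        history.append(cells)
    return history

if __name__ == "__main__":
    width = 64
    initial = [0] * width
    initial[width // 2] = 1                      # a single black cell
    for row in run(initial, 30):
        print("".join("#" if c else "." for c in row))
```

Changing RULE to any other number between 0 and 255 gives a different elementary automaton with the same few lines of code.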

How important are CA to these accounts? Dennett and Wolfram both use CA as intuition pumps. However, their positions seem to be slightly different: while the former sees CA as a “useful toolkit” to develop intuitions and vividly illustrate his arguments (Dennett 2003, p. 40), the latter claims that CA provide a “new kind of intuition”, one that “no branch of everyday experience” could provide (Wolfram 2002, p. 41).

The importance Wolfram attaches to CA seems to rely on a single, generic “indispensability argument” to the conclusion that CA justify the foundation of “a new kind of science” (Wolfram 2002, pp. 7–16). We can reconstruct this argument as follows:

(NKS1) The observation of CA evolution leads to a scientific discovery: “very simple rules produce highly complex behavior” (Wolfram 2002, p. 39). 
(NKS2) This discovery—the “new intuition”—promises to explain old and new phenomena and uncover important regularities. 
(NKS3) Therefore, the core of our current scientific practices (based on the “old intuition”) should be radically changed to accommodate this discovery.

(NKS1) is to be taken at face value: it entails that the discovery in question, namely that very simple rules can produce highly complex behavior, was not previously known. Wolfram talks in terms of “the single most surprising scientific discovery I have ever made” (Wolfram 2002, p. 27). Is (NKS1) true? For sure, the idea that a deterministic and simple system may produce unpredictable behavior started circulating in the scientific community well before Wolfram's work. Signs of what is now chaos theory can be traced back to the 19th and early 20th century, e.g., to the work of Poincaré 1914. One might grant that CA allowed the discovery that simple systems may produce complex behavior via the proof that they have unpredictable emergent computational complexity (although this discovery itself, as outlined in our brief historical section, was not made by Wolfram but only greatly publicized by him). Why was this discovery not made earlier? Wolfram's own diagnosis is twofold: on the one hand, we have the “engineering” intuition that to produce something complex we should build something complex, because this is how ordinary machines work. On the other hand, CA were not obviously connected to any established discipline, and therefore they were not studied in academic circles.

As for (NKS2), we just examined the case of free will. In Wolfram's perspective, free will looks just like another puzzling philosophical phenomenon explained away by the advance of (a new kind of) science. Just as life was puzzling before the discovery of the double helix, free will was puzzling before the discovery of a suitable scientific theory, one that can finally account for the separation between micro and macro level. Many reductionist philosophers are no strangers to this kind of argument. The concepts and intuitions used in contemporary philosophy are often rooted in current scientific practices. When groundbreaking discoveries are made, old arguments may be revised: troublesome concepts become harmless and new challenges are introduced. From this perspective, Wolfram's account of the free will problem may lack philosophical rigor, but it is a promising start to re-address the challenge armed with new scientific models of determinism and complexity—pretty much as Dennett does. While many successful applications are needed to fully vindicate (NKS2), our first assessment concludes that, at the very least, it is not obviously false. As for the “new regularities” promised by (NKS2), we will address them in the next section.

3.3 CA and the Philosophy of Computation

CA are computational systems that perform complex tasks on the basis of the collective behavior of simple items. What, if anything, does this tell us about the importance of computation for systems in nature?

Different conclusions have been drawn by practitioners in the field. Some have endorsed the more modest claim that the computational features of CA are important to understand and compare social, biological, and physical systems modeled by them; but others have taken CA to support the view that computation and information processing in a discrete setting lie at the very foundations of reality. We will explore the stronger claim in Section 3.4 below. As for the weaker claim, it is not possible here to address the general importance of computational properties across the sciences (see Mitchell 2009, pp. 169–185). We will focus instead on a specific, and controversial, principle put forward by Stephen Wolfram, the so-called “Principle of Computational Equivalence”:

There are various ways to state the Principle of Computational Equivalence, but probably the most general is just to say that almost all processes that are not obviously simple can be viewed as computations of equivalent sophistication. (Wolfram 2002, pp. 716–717)

The Principle is the most fundamental law of Wolfram's New Kind of Science, as well as a prominent example of the regularities promised by (NKS2): “its implications are broad and deep, addressing a host of longstanding issues not only in science, but also in mathematics, philosophy and elsewhere” (Wolfram 2002, p. 715). Contrary to Wolfram's claims, the Principle may not be new to philosophy at all. That “all processes can be viewed as computations” (Wolfram 2002, p. 715) has often been argued for in the history of philosophy, as has the claim that universal computation is a widespread phenomenon in the natural world (see e.g. Searle 1992, Putnam 1988 and the entry on Computation in Physical Systems). However, Wolfram's explanation of the Principle includes two further, and more specific, statements: i) no natural system can compute more things than a universal digital computer (see Wolfram 2002, p. 730), that is, “universal computation is an upper limit on the complexity of computation” (Mitchell 2009, p. 157); and ii) the computations performed by natural systems are essentially equivalent in sophistication (see Wolfram 2002, pp. 719–726).

The first point is relevant once we compare digital computation with the idea of a computer working with real numbers in continuous time. It has been proved (see Moore 1996) that such a device would be able to compute more functions than a traditional Turing machine. However, proponents of a discrete space-time like Wolfram treat continuous processes as, in a sense, epiphenomenal, since they already have independent reasons (some of which will be addressed below) to believe in a fundamentally discrete universe. As for the second point, its main problem is that the interpretation of “equivalent sophistication” is not straightforward. For even assuming that universal computation is widespread, it does not seem to follow that all computation is equivalent in sophistication. Complexity scientists, even after having agreed with Wolfram on the importance of computation for social, biological and physical systems, and even on the extent to which universal computation is supported in nature, are puzzled by his claim:

I find it plausible that my brain can support universal computation (…) and that the brain of the worm C. elegans is also (approximately) universal, but I don't buy the idea that the actual computations we engage in, respectively, are equivalent in sophistication. (Mitchell 2009, p. 158)

It is not clear what to make of computational equivalence. Yes, there is a threshold in which systems are related to one another, but given the difficulty of moving among them, is this any more useful than saying that skateboards and Ferraris are equivalent means of moving about? (Miller, Page 2007, p. 232)

Miller and Page argue that, for all scientific purposes, “representations do matter, and what can be easily computed in one system is often difficult (but still possible) to compute in another”. Even if Wolfram is right when he claims that a simple automaton can calculate the first few prime numbers (Wolfram 2002, p. 640), the calculation we have to do to encode the input and decode the output is very complex:

This latter calculation could be much more “difficult”  to compute than the original problem, just as the complexity of a compiler can far exceed the complexity of the programs it produces. (Miller, Page 2007, p. 232)

Moreover, a rationale for studying CA is that they solve some computational problems more efficiently than standard computers (Ilachinski 2001, p. 8). Unless Wolfram's notion of “equivalent sophistication” simply means “they compute the same functions”—in which case the claim is a truism—the Principle cannot explain this empirical difference. The Principle may have a more substantive reading if understood as a metaphysical thesis about the universe in general, rather than as a scientific generalization having merely heuristic value. Under this stronger reading, the Principle is no longer concerned with particular systems that can be fruitfully analyzed via computation theory, but with the fact that (different epistemological properties notwithstanding) the world itself is a computer. In a sense, any system would just be the emergent manifestation of a unique, underlying computational reality. This naturally leads us to the boldest question: what if the universe itself were a CA?

3.4 CA as Models of Reality

In the last fifty years various scientists (see Zuse 1982, Fredkin 1993, Wolfram 2002) have explicitly advanced a bold conjecture: that the physical universe is, fundamentally, a discrete computational structure. Everything in our world—quarks, trees, human beings, remote galaxies—is just a pattern in a cellular automaton, much like a glider in Life. We can single out two main reasons to investigate this provocative claim. First, the arguments put forward to support the view may be philosophically interesting in themselves. Second, the ontological structure of a CA-world can be fruitfully compared to existing metaphysical accounts. Let us take each point in turn.

The picture of nature as a CA is supported by an epistemological desideratum, i.e., having exact computational models of the physical world (see for instance the discussion of ontic pancomputationalism in Computation in Physical Systems). While this is certainly one of the arguments involved, it is not the only one and probably not the strongest. As Piccinini points out, even someone who shares that desire “may well question why we should expect nature to fulfill it” (Piccinini 2010, Section 3.4).

Ilachinski proposes a different “argument from epistemology” (Ilachinski 2001, pp. 661–2). Let us consider again the space-time diagram of Rule 110:

 
Fig. 7

Let us imagine that we did not know that it was generated by the iteration of a simple local rule, or even that it is an automaton at all. Then, says Ilachinski:
