Cellular Automata

Cellular automata (henceforth: CA) are discrete, abstract computational systems that have proved useful both as general models of complexity and as more specific representations of non-linear dynamics in a variety of scientific fields. CA are (typically) spatially and temporally discrete: they are composed of a finite or denumerable set of homogeneous, simple units, the atoms or cells. At each time step, the cells instantiate one of a finite set of states.

At a first approximation, the Edge of Chaos hypothesis is confirmed: three of the eight neighbors are Class3, three are Class2, two are Class1; Rule 110 is the only Class4 in the table. To generalize these findings to the entire class of rules for one-dimensional CA, Langton introduced a parameter, λ, that applies to each φ: for k = 2, r = 1 (binary-state, range-1) CA, λ(φ) can be computed simply as the fraction of entries of the transition rule table that are mapped to a non-zero output (see Langton 1990, p. 14 for the general definition). What this means in our case is simple: λ(φ) equals the fraction of ones in the rule's output column—e.g. λ(φ) = 5/8 for φ = Rule 110 and λ(φ) = 1/2 for φ = Rule 46. Langton's major finding was that a simple measure such as λ correlates with the system behavior: as λ goes from 0 to 1, the average behavior of the systems goes from freezing to periodic patterns to chaos. Langton singled out 1/2 as the value of λ at which the average behavior first shows evidence of chaos: rules φ with λ(φ) ∼ 1/2 were highlighted as being on the Edge (see Miller and Page 2007, p. 133).
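Since λ is just a bit count, it can be computed in a few lines. Here is a minimal Python sketch (the function name is ours, for illustration), using the standard Wolfram numbering in which a rule's binary expansion is the output column of its transition table:

```python
def langton_lambda(rule_number, table_size=8):
    """Langton's lambda for a k = 2, r = 1 CA given its Wolfram rule
    number: the fraction of rule-table entries mapped to a non-zero state."""
    return bin(rule_number).count("1") / table_size

print(langton_lambda(110))  # 0.625, i.e. 5/8, as in the text
print(langton_lambda(46))   # 0.5, i.e. 1/2
```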

λ      All Rules   Chaotic Rules   Complex Rules
0          1             0               0
1/8        8             0               0
1/4       28             2               0
3/8       56             4               1
1/2       70            20               4
5/8       56             4               1
3/4       28             3               0
7/8        8             0               0
1          1             0               0


Chaotic and complex rules both have an average λ value around 1/2, thus apparently supporting the Edge of Chaos hypothesis. It is fair to say, though, that some have cast doubt on the explanatory role of the parameter λ and the inferences drawn from it. In particular, the transition region of the Edge of Chaos seems to be itself complex. Miller and Page note that “there are multiple edges, not just a single one” (Miller and Page 2007, p. 133). Aggregate results do not hold when we analyze individual rules, even paradigmatic ones:

 

Rule   110   111   108   106   102   126    78    46   238
λ      5/8   3/4   1/2   1/2   1/2   3/4   1/2   1/2   3/4


As the table shows, among the Rule 110 neighbors, some chaotic rules φ have λ(φ) = 3/4, some cyclic ones have λ(φ) = 1/2 and, indeed, “every one of the rules classified as complex in this space has at least one chaotic neighbor with a lower λ value and one with higher value” (Miller and Page 2007, p. 135). Melanie Mitchell, Peter Hraber and James Crutchfield replicated Langton and Packard's experiments, reporting very different results (Mitchell, Hraber and Crutchfield 1994). In particular, they report that serious computational phenomena take place much closer to a chaotic λ(φ) = 1/2 than was previously thought. Apart from technical points, a main conceptual flaw in the original findings is the use of aggregate statistics, which are difficult to interpret in a high-variance context: “if, instead, the hypotheses are concerned with generic, statistical properties of CA rule space—the ‘average’ behavior of an ‘average CA’ at a given λ—then the notion of ‘average behavior’ must be better defined” (Mitchell, Hraber and Crutchfield 1994, p. 14).
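To make the notion of rule-space neighbors concrete: neighbors of a k = 2, r = 1 rule are the rules whose 8-entry tables differ from it in exactly one entry, i.e. one flipped bit. A minimal sketch (function name ours) that reproduces the λ row of the table above:

```python
def rule_neighbors(rule_number, table_size=8):
    # Flip each of the eight rule-table bits in turn.
    return sorted(rule_number ^ (1 << i) for i in range(table_size))

for phi in rule_neighbors(110):
    print(phi, bin(phi).count("1") / 8)
# Rules 46, 78, 102, 106, 108 have lambda 1/2;
# rules 111, 126, 238 have lambda 3/4.
```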

While it is fair to conclude that complex behavior does not lie at the Edge of Chaos taken in a simplistic sense (i.e., it is not straightforwardly correlated with the simple λ), the interest in the connection between computational capabilities and phase transitions in the CA rule space has been growing since then. We will consider such developments below, specifically in the context of CA and the philosophy of computation.

2.5 CA in More Dimensions: the Game of Life

Notwithstanding the computational interest of one-dimensional CA, philosophical issues have been discussed mostly in connection with two-dimensional CA. In fact the first CA, namely von Neumann's self-reproducing automaton, inhabited a two-dimensional grid. Besides, two-dimensional CA are suitable for representing many physical, biological, and even human phenomena, ranging from the dynamics of perfect gases to the movements of birds in a flock and soldiers on a battlefield. The most common configurations have either square or hexagonal cells, given their translational and rotational symmetries. Moving to two dimensions, of course, also expands the range of interesting combinations of rules and neighborhoods. As for the latter, the two most common options in a grid of squares are the von Neumann neighborhood, where each cell interacts only with its four horizontally and vertically adjacent mates, and the Moore neighborhood, comprising all eight immediately adjacent cells.
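In code, the two neighborhoods are simply two sets of (row, column) offsets from the cell being updated; a minimal sketch (names ours):

```python
# von Neumann neighborhood: the four orthogonally adjacent cells.
VON_NEUMANN = [(-1, 0), (1, 0), (0, -1), (0, 1)]

# Moore neighborhood: all eight immediately adjacent cells.
MOORE = [(dr, dc)
         for dr in (-1, 0, 1)
         for dc in (-1, 0, 1)
         if (dr, dc) != (0, 0)]
```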

The best way to understand the two-dimensional perspective is again by way of example. We are therefore going to introduce the famous Game of Life (or, more briefly, Life) by John Conway (see Berlekamp, Conway, Guy 1982). In fact, Life fits well with our usual descriptive framework:

  1. 2-dimensional lattice of square cells in an orthogonal grid.
  2. Σ = {1, 0}, so |Σ| = 2 (we can intuitively picture 1 as the state of being alive for a given cell, 0 as the state of being dead, for reasons we are about to see).
  3. Each cell's neighborhood is composed of all its eight neighboring cells (the Moore neighborhood).
  4. Life's transition rule goes as follows (a minimal code sketch follows this list). At each time step t exactly one of three things can happen to a cell:
    1. Birth: If the cell state at t − 1 was 0 (dead), the cell state becomes 1 (alive) if exactly three neighbors were 1 (alive) at t − 1;
    2. Survival: If the cell state at t − 1 was 1 (alive), the cell state is still 1 if either two or three neighbors were 1 (alive) at t − 1;
    3. Death: If the cell state at t − 1 was 1 (alive), the cell state becomes 0 (dead) if either fewer than two or more than three neighbors were 1 (alive) at t − 1 (cells can die of “loneliness” or “overpopulation”).
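This rule translates directly into code. The following is a minimal Python sketch (names are ours, for illustration), assuming a finite grid with wrap-around edges; Life proper is defined on an unbounded lattice, so the toroidal boundary is a simplification:

```python
def step(grid):
    """One synchronous update of Life's rule on a toroidal grid.

    grid is a list of lists of 0/1 (0 = dead, 1 = alive); the function
    returns the configuration at the next time step."""
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count live cells in the Moore neighborhood (eight neighbors).
            live = sum(grid[(r + dr) % rows][(c + dc) % cols]
                       for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                       if (dr, dc) != (0, 0))
            if grid[r][c] == 1:
                # Survival with two or three live neighbors; death otherwise
                # ("loneliness" or "overpopulation").
                new[r][c] = 1 if live in (2, 3) else 0
            else:
                # Birth with exactly three live neighbors.
                new[r][c] = 1 if live == 3 else 0
    return new
```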

The simple Life rule makes it a Class4 CA in Wolfram's taxonomy. In fact, this simple setting is a rich digital environment where periodic structures, stable blocks and complex moving patterns come into existence, even when the CA is set to start from a very simple initial configuration. As Conway remarked: “It's probable, given a large enough Life space, initially in a random state, that after a long time, intelligent, self-reproducing animals will emerge and populate some parts of the space” (cited in Ilachinski 2001, p. 131).

Life fans have in fact long explored the CA's possible patterns of evolution, sharing their findings in what is known as Life's zoology (see e.g. Daniel Dennett's remarks in Dennett 2003, p. 41). We have collected a small gallery of samples together with snapshots of a typical simulation (for more pictures and animations, see the Internet references at the end). Gliders are the most popular among the basic Life inhabitants: a simple 5-cell structure, a glider travels across the Life grid in a 4-time-step cycle and, thanks to its versatility, it is often among the basic building blocks of more elaborate structures.

 

[Figure: a Glider, shown at time steps t0, t1, and t12]


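As a usage example for the step function sketched above, one can check the glider's period-4 diagonal motion directly (the grid size and starting position are arbitrary choices of ours):

```python
# One standard glider orientation, as (row, column) cells:
#   . X .
#   . . X
#   X X X
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}

SIZE = 10
grid = [[0] * SIZE for _ in range(SIZE)]
for r, c in glider:
    grid[r][c] = 1

for _ in range(4):
    grid = step(grid)

after = {(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c]}
# After one full 4-step cycle the glider has moved one cell diagonally.
assert after == {(r + 1, c + 1) for r, c in glider}
```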
Toads are period 2 blinking configurations: together with Blinkers and Beacons they are the simplest “oscillators” of this universe.

 

[Figure: a Toad, shown at time steps t0, t1, and t3]


Eaters can devour other configurations, e.g. gliders, while keeping their own form intact (because of this, they play an important role in Life's computational abilities).

[Figure: an Eater devouring a Glider, shown at time steps t0, t2, and t4]


A typical evolution of Life starting from random initial conditions may contain all of the above notable figures and much more. Some initial configurations may settle, even after a few time steps, into static or simply periodic structures. Other configurations, though, can produce non-periodic, increasingly complex environments whose development can be unpredictable (even in the computational sense we are about to explore). As Ilachinski has suggestively conjectured from this:

Upon observing the seemingly unlimited complexity and variety of Life's evolving patterns, it becomes almost impossible to refrain from imagining, along with Conway, that, were the game really to be played on an infinite lattice, there must surely arise true living ‘life-forms’, perhaps themselves evolving into more complex, possibly sentient, ‘organisms’. (Ilachinski 2001, p. 133)

[Figure: snapshots of a typical Life evolution from a random initial configuration, at time steps t0, t10, t20, t30, t40, and t175]


The mathematical literature on CA does not refrain from describing the Life configurations using the same imaginative vocabulary we used, that is, in terms of items that are born, that live, move around, eat other figures, die, etc. The Life zoology, though, is in an important sense in the eye of the beholder: the universe these patterns inhabit can be more neutrally described as a collection of individual cells, each of which does not directly depend on what is happening on the macro-scale. Even more neutrally, Life can be described in the simple mathematical language of matrices and discrete sequences. But even if one is told the basic Life rule, one could hardly imagine the complexity it can generate—until one sees it. Life's reputation among scientists and philosophers arguably comes from its challenging naive intuitions about complexity, pattern formation, persistence, and continuity: as a toy universe we ourselves built, we feel we should know in advance what dynamics are allowed. This has been shown to be impossible, in a mathematically precise sense.

2.6 Life as a Universal Turing Machine

Like any other CA, Life can be considered as a computational device: an initial configuration of the automaton can encode an input string. One can let the system run and, at some point, read the current configuration as the result of the computation performed so far, decoding it into an output string. But exactly what can Life compute? It turns out that Life can compute everything a universal Turing machine can and therefore, given Turing's Thesis, function as a general purpose computer: a suitable selection of initial conditions can ensure that the system carry out arbitrary algorithmic procedures.

The proof of the universal computational capacities of Life presented in Berlekamp, Conway, Guy 1982 consists in showing that the basic building blocks or primitives of standard digital computation can be emulated by appropriate patterns generated by Life—in particular: (a) data storage or memorization, (b) data transmission requiring wires and an internal clock, and (c) data processing requiring a universal set of logic gates, like negation, conjunction and disjunction. An actual Turing machine was later explicitly implemented in Life by Paul Rendell (see Rendell's own web page and MathWorld).

This finding is not of great engineering importance (no one would spend their time translating “24 + 26 / 13” into Life). However, it raises a conceptual issue about any universe sharing the capacity of producing and hosting universal computers: because of the aforementioned Halting Theorem, there cannot be any general algorithm to decide whether, given an initial configuration as input, Life will eventually die out or halt. It is in this sense that the evolution of the automaton is unpredictable. Given that the development of CA that are computationally universal cannot be predicted by direct mathematical analysis alone, it is no surprise that CA practitioners have adopted the language of philosophy and talked of phenomenological studies of CA. Here the automaton is realized as computer software, and the observable emergent properties of its evolution are empirically registered as the computer simulation advances. In Wolfram's turn of phrase, Life is algorithmically irreducible: no algorithmic shortcut is available to anticipate the outcome of the system given its initial input. “Life—like all computationally universal systems—defines the most efficient simulation of its own behavior” (Ilachinski 2001, p. 15). This raises the important philosophical question of the limits of the predictability of any universe—possibly our actual universe—which is itself capable, just as Life is, of producing and hosting universal computers.
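The practical upshot can be phrased in code: the best one can do is simulate for a bounded number of steps, and a negative answer within the bound settles nothing. A minimal sketch reusing the step function above (the function name and bound are ours):

```python
def dies_out_within(grid, max_steps):
    """Return True if the Life grid becomes empty within max_steps updates.

    A False return proves nothing: by the unpredictability argument above,
    no finite bound decides the question in general."""
    for _ in range(max_steps):
        grid = step(grid)
        if not any(cell for row in grid for cell in row):
            return True
    return False
```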

2.7 Further CA

Notwithstanding the historical and conceptual centrality of the CA described in this section, many important developments in the field could not be presented in the space allowed for this entry. It is possible to relax some of the assumptions in the general characterization of CA provided in Section 2.1 and get interesting results. The transition rule, for instance, can be probabilistic and take into consideration more than just one time step (see Richards, Meyer, Packard 1990: probabilistic automata are widely used to represent the stochastic dynamics of microphysical systems); the cell-state updating can be asynchronous (see Ingerson, Buvel 1984); the lattice can be made of non-homogeneous cells following different transition rules (see Kauffman 1984); even the discreteness constraint can be relaxed by having the set of states be the set of real numbers (see Wolfram 2002, pp. 155–157).
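To illustrate one of these relaxations, here is a minimal sketch of asynchronous updating for a one-dimensional k = 2, r = 1 CA: a single randomly chosen cell, rather than the whole lattice, is updated at each step (the rule encoding follows the Wolfram numbering used earlier; names are ours):

```python
import random

def async_step(cells, rule_number):
    """Update one randomly chosen cell of a circular 1-D lattice in place."""
    n = len(cells)
    i = random.randrange(n)
    left, right = cells[(i - 1) % n], cells[(i + 1) % n]
    # The (left, center, right) triple indexes one bit of the rule number.
    index = (left << 2) | (cells[i] << 1) | right
    cells[i] = (rule_number >> index) & 1
    return cells
```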

CA are also being fruitfully used in connection with the issue of the thermodynamic limits of computation: is there a minimum amount of energy needed to perform a logical operation? Landauer (1961) argued that irreversible logical operations (i.e., operations that do not correspond to bijections and thus cannot be run backwards, since they entail some information loss) necessarily dissipate energy. The invention of the Fredkin reversible logical gate and of the Billiard Ball Model of reversible computation (Fredkin and Toffoli 1982) strengthened the importance of the link between universal reversible automata and the physical properties of computation (for an overview, see Ilachinski 2001, pp. 309–323).
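The Fredkin gate itself is easy to state: a controlled swap, which is a bijection (hence reversible) and logically universal given constant inputs. A minimal sketch; the AND-recovery noted in the docstring is a standard textbook construction, not taken from the source:

```python
def fredkin(c, a, b):
    """Fredkin (controlled-swap) gate: if c is 1, swap a and b.

    The gate is a bijection on (c, a, b) and is its own inverse.
    Universality sketch: the third output of fredkin(a, b, 0) is b when
    a is 1 and 0 otherwise, i.e. the conjunction a AND b."""
    return (c, b, a) if c else (c, a, b)
```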

Finally, it is worth mentioning that genetic algorithms have been used with CA to study how evolution creates computation (for a survey of important results, see Mitchell, Crutchfield, Das 1996). While the aforementioned sources further explore these possibilities, the sample CA models discussed so far will be sufficient for the philosophical arguments we are going to address henceforth.

3. CA and Philosophy

A growing number of CA-related philosophical arguments are being produced, both by professional philosophers and by scientists interested in the conceptual implications of their work. Among the interesting issues already addressed through the CA approach in the philosophical market are the structure of emergence, free will, the nature of computation, and metaphysics in a digital world.

3.1 CA and Emergence

CA can be considered a paradigmatic locus for the study of phenomena related to emergence. Very roughly, the problem of emergence has to do with the appearance in a system of higher-level features and properties which, on the one hand, are clearly rooted in the basic, low-level properties of the system; but on the other hand, do not seem to be simply or mechanically obtainable from the latter (for an introduction, see the entry on Emergent Properties). The literature on CA has variously addressed the issue. On the one hand, being simple and controllable low-level environments, CA present themselves as a natural framework for tackling the problem in its purest form. On the other hand, CA researchers have recognized how the systemic and global features of a complex CA system can be hard to predict even with perfect knowledge of low-level entities and laws:

Over and over again we will see the same kind of thing: that even though the underlying rules for a system are simple, and even though the system is started from simple initial conditions, the behavior that the system shows can nevertheless be highly complex. (Wolfram 2002, p. 28)

Due to the local nature of a CA's operations, it is typically very hard, if not impossible, to understand the CA's global behavior (…) by directly examining either the bits in the lookup table or the temporal sequence of raw 1–0 spatial configurations of the lattice. (Hordijk, Crutchfield, Mitchell 1996, p. 2)

One can divide the problem of emergence into two separate questions, roughly corresponding to epistemological and ontological issues: How can we recognize emergence? What is the ontological status of the high-level properties and features? As a matter of historical fact, CA have been invoked mainly to address the former. Epistemological issues have often been raised in connection with complex systems in general. In their open agenda for complex social systems, Miller and Page include the following question: “Is there an objective basis for recognizing emergence and complexity?” (Miller, Page 2007, pp. 233–234). The issue of detecting emergence is connected to the conceptual problems of defining what an emerging feature is. Only once we know what we are looking for can we scan the space-time evolution of a system and recognize its patterns.

Unsurprisingly, the emergent properties attracting CA scholars' attention have mainly been computational properties, i.e., the features enabling the system to perform complex calculations. Already in 1985 Wolfram was asking, “What higher-level descriptions of information processing in cellular automata can be given?” (Wolfram 1985, p. 181). Here too, in order to introduce the formal work on emergent CA computation and to compare these findings with the available philosophical accounts, we can start with a specific example: the “density classification problem”. We want to design a one-dimensional automaton that answers a simple question: are there more white or black cells at a given time t0? Starting from any initial condition with more white (black) cells, the ideal automaton will halt having all its cells turned white (black) after a given number of time steps (designing an automaton that always gives the right answer is not feasible in practice, so the performance is judged by the fraction of random initial conditions that are correctly classified).
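The scoring procedure just described is straightforward to sketch. In the following hypothetical sketch (all names and parameter choices are ours, not the actual experimental code), a rule is a lookup table from radius-r neighborhoods to new states, and its score is the fraction of random initial conditions it classifies correctly:

```python
import random

def run_ca(cells, table, r, steps):
    """Iterate a circular 1-D CA given as a lookup table over radius-r
    neighborhoods (tuples of 2r + 1 cell states)."""
    n = len(cells)
    for _ in range(steps):
        cells = [table[tuple(cells[(i + d) % n] for d in range(-r, r + 1))]
                 for i in range(n)]
    return cells

def classification_score(table, r, width=149, steps=300, trials=100):
    """Fraction of random initial conditions classified correctly: a run
    counts as correct iff every cell ends in the initial majority state.
    The odd lattice width guarantees a strict majority always exists."""
    correct = 0
    for _ in range(trials):
        ic = [random.randint(0, 1) for _ in range(width)]
        majority = 1 if 2 * sum(ic) > width else 0
        if all(c == majority for c in run_ca(ic, table, r, steps)):
            correct += 1
    return correct / trials
```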

[Figure: a density classification run, from the initial configuration at t0 to the final configuration at tn]


The task would be trivial for a standard serial computational device, but is far from trivial in a CA, precisely because it turns on an issue of emergence. The answer requires a global perspective (how many cells are white (black) in the lattice as a whole?). However, the cells work with local rules: it seems that no single cell can do the counting. The ideal automaton should find a way to aggregate information from several parts of its own lattice to give the final answer: a kind of “emergent computation” is needed to successfully solve this density classification task. It has been proved that no CA can solve this problem exactly (see Land and Belew 1995). Since, however, CA with larger neighborhoods achieve better results in general, and their rule spaces are far too large to search exhaustively, genetic algorithms are used to search the solution space efficiently (genetic algorithms are computational analogues of genetic evolution, in which a computational problem is addressed by producing, combining and selecting solutions to it; the details of the procedure as applied to the classification problem are not important for our purpose; for an accessible presentation see Mitchell 2009, p. 22; for a general introduction see Mitchell 1998). The following is a diagram of “Rule φ17083”, discovered by James Crutchfield and Melanie Mitchell (Crutchfield, Mitchell 1995). The CA implementing the rule starts from an initial state with more white cells:
