Conway's Game of Life: Complexity from Simplicity

In 1970, mathematician John Horton Conway devised a cellular automaton so simple that its rules fit on an index card - yet so rich that it can simulate a universal Turing machine. The Game of Life, first introduced to the public through Martin Gardner's column in Scientific American [1], became one of the most studied objects in mathematics and computer science. It is not a game in the conventional sense: there are no players and no decisions. You set up an initial configuration of cells, press play, and watch what unfolds.

The Rules

The Game of Life takes place on an infinite two-dimensional grid. Each cell is either alive or dead. At each time step, every cell simultaneously updates according to four rules based on its eight neighbours (the Moore neighbourhood):

1. Underpopulation - A live cell with fewer than two live neighbours dies.
2. Survival - A live cell with two or three live neighbours survives.
3. Overpopulation - A live cell with more than three live neighbours dies.
4. Reproduction - A dead cell with exactly three live neighbours becomes alive.

That is it. No randomness, no external input, no exceptions. Every future state is entirely determined by the initial configuration. Yet from these four rules, an extraordinary zoo of behaviours emerges.
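The four rules reduce to a compact update function. As a minimal sketch (not from the original article), one convenient implementation represents the board as a sparse set of live-cell coordinates, so the grid is effectively unbounded, matching the infinite-grid formulation:

```python
from collections import Counter

def step(live):
    """Advance one generation. `live` is a set of (x, y) live-cell coordinates."""
    # Count the live neighbours of every cell adjacent to a live cell
    # (the Moore neighbourhood: the eight surrounding cells).
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Survival: a live cell with 2 or 3 neighbours stays alive.
    # Reproduction: a dead cell with exactly 3 neighbours becomes alive.
    # Underpopulation and overpopulation are every other case: the cell is dead.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# The blinker, a period-2 oscillator, returns to itself after two steps:
blinker = {(0, 0), (1, 0), (2, 0)}
assert step(step(blinker)) == blinker
```

Because only live cells are stored, cells far from the action cost nothing; the counting trick works because a cell's neighbour count equals the number of live cells that list it as a neighbour.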

Still Lifes, Oscillators, and Spaceships

Patterns in the Game of Life fall into a natural taxonomy. Still lifes are stable configurations that do not change from one generation to the next - the block (a 2x2 square) and the beehive are common examples. Oscillators cycle through a fixed sequence of states: the blinker (period 2) and the pulsar (period 3) are well known. Spaceships are patterns that translate across the grid - the glider, discovered by Richard Guy in 1969 while Conway's group tracked the evolution of the R-pentomino, moves one cell diagonally every four generations and became the iconic symbol of the Game of Life [2].
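These classes can be checked mechanically. The self-contained sketch below (illustrative code, not from the article; drawn with y increasing downward) advances a glider four generations and confirms it reappears shifted one cell diagonally:

```python
from collections import Counter

def step(live):
    """One Game of Life generation over a sparse set of (x, y) live cells."""
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# The glider, with y increasing downward:
#   . X .
#   . . X
#   X X X
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

cells = glider
for _ in range(4):
    cells = step(cells)

# After four generations the same shape reappears one cell down-right
# in this orientation - the glider travels at speed c/4.
assert cells == {(x + 1, y + 1) for x, y in glider}

# A still life such as the block is a fixed point of step:
block = {(0, 0), (1, 0), (0, 1), (1, 1)}
assert step(block) == block
```

The same loop, run on any candidate pattern, classifies it empirically: a fixed point is a still life, a cycle back to the start is an oscillator, and a cycle back to a translated copy is a spaceship.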

More complex patterns include guns - stationary patterns that periodically emit spaceships. Bill Gosper discovered the first glider gun in 1970, winning a $50 prize offered by Conway. The existence of guns was the key to proving that the Game of Life can support unbounded growth and, ultimately, universal computation.

Von Neumann's Vision

The intellectual ancestry of the Game of Life traces back to John von Neumann, who in the 1940s sought to design a self-reproducing automaton. Working with Stanislaw Ulam, von Neumann developed the concept of cellular automata - lattices of identical finite-state machines that update synchronously based on local rules. Von Neumann's original construction used 29 states and was enormously complex [3]. Conway's contribution was to show that even a two-state automaton with a single, elegant rule set could produce behaviour of comparable richness.

Universality and Computation

Perhaps the most profound property of the Game of Life is that it is Turing-complete. Any computation that can be performed by a conventional computer can, in principle, be carried out by a sufficiently large Game of Life configuration. Logic gates can be built from streams of gliders; glider guns provide clocking signals; and collision-based interactions implement Boolean operations. This was established rigorously in the decades following Conway's original work, with constructions of explicit Turing machines within the Life grid.

Stephen Wolfram's systematic study of cellular automata in the 1980s placed the Game of Life within a broader classification framework, identifying it as a "Class IV" automaton - poised at the boundary between order and chaos, capable of supporting complex, long-lived transient structures [4].

Why It Matters for Biomimetics

The Game of Life is a paradigm example of emergence - complex global behaviour arising from simple local rules with no central coordination. This is exactly the principle underlying swarm intelligence, morphogenesis, and many other biological phenomena that biomimetics seeks to understand and replicate. Cells in the Game of Life have no awareness of the patterns they form, just as individual ants have no conception of the colony's logistics, and neurons have no knowledge of the thoughts they collectively produce.

The lesson is powerful: you do not need complex rules to produce complex behaviour. You do not need a central controller. You need only the right local interactions, applied consistently, and complexity emerges for free. This insight has influenced fields from artificial life to decentralised computing to self-organising materials.

You can explore the Game of Life yourself in the interactive simulation, where you can draw cells, choose classic patterns like the Gosper glider gun, and watch emergence happen in real time.

References

  1. Gardner, M. (1970). The fantastic combinations of John Conway's new solitaire game "life". Scientific American, 223(4), 120–123. doi:10.1038/scientificamerican1070-120
  2. Berlekamp, E. R., Conway, J. H. & Guy, R. K. (1982). Winning Ways for Your Mathematical Plays, Vol. 2. Academic Press. ISBN 978-0-12-091152-3
  3. Von Neumann, J. (1966). Theory of Self-Reproducing Automata (edited and completed by A. W. Burks). University of Illinois Press. ISBN 978-0-252-72733-7
  4. Wolfram, S. (1984). Universality and complexity in cellular automata. Physica D: Nonlinear Phenomena, 10(1–2), 1–35. doi:10.1016/0167-2789(84)90245-8