Assembly Theory

In April, Sara Imari Walker gave a talk (video, essay) at the Long Now Foundation about Assembly Theory, a framework developed by chemist Lee Cronin and extended by Walker and others.

I still struggle to decide whether she and Cronin are on to something. A combination of causal depth (does this structure require many causal steps to assemble?), modularity (does it reuse sub-structures?), multiplicity (are there many instances of it?) and contingency (could it have turned out differently, or is it an attractor?) does seem to be a better measure of life-like complexity than, e.g., mere algorithmic complexity. Below is a distillation of an hour-long session on Assembly Theory I had with ChatGPT-5.

Q: What is Assembly Theory (AT)?
A: AT measures the Assembly Index (AI) of an object: the minimal number of joining steps needed to build it from primitives, where any intermediate structure, once built, can be reused. A high AI implies a long causal history, while low-AI objects (like crystals or water) can form directly from simple rules.
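To make the definition concrete, here is a minimal brute-force sketch for the special case of strings, where the only join operation is concatenation and every fragment, once built, can be reused for free. This is my own toy illustration (the name assembly_index and all details are mine); published AT work computes the index over molecular bond graphs, not strings.

```python
def assembly_index(target: str) -> int:
    """Minimum number of concatenations needed to build `target` from
    single characters, where every fragment built along the way can be
    reused for free. Exhaustive search: only practical for short strings."""
    if len(target) <= 1:
        return 0

    # Any useful intermediate must appear as a contiguous substring.
    substrings = {target[i:j]
                  for i in range(len(target))
                  for j in range(i + 2, len(target) + 1)}

    primitives = frozenset(target)   # single characters come for free
    best = [len(target) - 1]         # naive bound: join left to right
    seen: dict[frozenset, int] = {}  # pool -> fewest steps it was reached in

    def search(pool: frozenset, steps: int) -> None:
        if target in pool:
            best[0] = min(best[0], steps)
            return
        if steps + 1 >= best[0] or seen.get(pool, steps + 1) <= steps:
            return                   # cannot improve, or already explored
        seen[pool] = steps
        for a in pool:               # try joining any two known fragments
            for b in pool:
                joined = a + b
                if joined in substrings and joined not in pool:
                    search(pool | {joined}, steps + 1)

    search(primitives, 0)
    return best[0]

print(assembly_index("aaaa"))    # 2: a+a -> aa, then aa+aa -> aaaa
print(assembly_index("banana"))  # 4: n+a, na+na, b+a, ba+nana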

Q: How is this different from Shannon or Kolmogorov complexity?
A: Shannon entropy measures unpredictability in symbol distributions; Kolmogorov complexity measures shortest description length. Both misclassify random noise as “maximally complex.” AT instead asks: what causal steps must have happened? Noise has low AI (one stochastic step), while evolved structures like proteins have high AI (many nested joins).
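Shannon entropy's blindness to causal structure is easy to demonstrate: per-symbol entropy depends only on the character distribution, so a string and a random shuffle of it score identically. A small sketch of my own:

```python
import math
import random
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits per symbol of the empirical character distribution."""
    counts = Counter(s)
    return -sum(c / len(s) * math.log2(c / len(s)) for c in counts.values())

structured = "abcabcabcabcabcabc"
shuffled = "".join(random.sample(structured, len(structured)))

# Identical entropy, even though one string is ordered and modular
# while the other is noise:
print(shannon_entropy(structured), shannon_entropy(shuffled))
```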

Q: What role does modularity play?
A: Life’s molecules are built through nested reuse of substructures (e.g. protein domains, codon motifs). This ratcheting reuse drives AI upward: each module, once established, becomes a cheap building block for still deeper structures. Crystals also repeat, but only by simple rules, so their AI remains shallow.
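Reusing the toy assembly_index() sketch from above (again a string illustration, not molecular data): a modular string needs far fewer joins than a repeat-free string of the same length. That is the sense in which reuse ratchets depth upward: with each module banked, the same budget of causal steps reaches much larger structures, so deep objects become reachable at all.

```python
# Same length, very different construction costs:
print(assembly_index("abababab"))  # 3: ab, ab+ab, abab+abab
print(assembly_index("abcdefgh"))  # 7: no repeats, every join adds one char
```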

Q: How does AT distinguish noise from life-like complexity?
A: By combining three filters (a toy version in code follows the list):

  1. AI (causal depth): noise = shallow, evolved molecules = deep.
  2. Modularity: evolved structures reuse subassemblies hierarchically; noise does not.
  3. Multiplicity: abundant recurrence of the same high-AI object implies copying/selection, not chance.
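A toy composite rule in this spirit is sketched below. Cronin's group has reported that, in mass-spectrometry experiments, molecules above an assembly index of roughly 15 appeared only in samples containing or derived from life; the thresholds here are nonetheless illustrative placeholders, and the function is my own invention, not a published API.

```python
def looks_life_derived(ai: int, copies: int, modular: bool,
                       ai_threshold: int = 15, min_copies: int = 2) -> bool:
    """Toy composite filter in the spirit of AT: flag an object only if it
    is causally deep, modular, and abundant. All names and thresholds are
    illustrative placeholders."""
    deep = ai >= ai_threshold        # filter 1: causal depth
    abundant = copies >= min_copies  # filter 3: multiplicity
    return deep and modular and abundant  # filter 2: modular reuse

print(looks_life_derived(ai=25, copies=10_000, modular=True))  # True: life-like
print(looks_life_derived(ai=25, copies=1, modular=False))      # False: a one-off
```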

Q: What about contingency?
A: In AT, “contingency” means the structure is not uniquely forced by physics (like a crystal lattice), but is one among many possible outcomes frozen by history. Even if life tends toward convergent attractors (e.g. eyes, ATP, the genetic code), the specific realization is still contingent because alternatives exist and sometimes persist (variant genetic codes).

Q: Doesn’t this reduce to a new checklist, like traditional definitions of life?
A: Classical definitions use multiple hallmarks (metabolism, replication, evolution). AT reframes them under a single principle: objects with measurable causal depth, modular reuse, and abundant recurrence are almost certainly life-derived. It still looks like “multiple criteria,” but proponents argue they all flow from one axis: assembly depth.

Q: How does AT treat viruses, junk DNA, or abiotic soups?
A:

  • Viruses: Their particles have high AI and abundance, so they’re life-derived, though not autonomous generators of AI.
  • Junk DNA: High AI and abundant — a biosignature — but AI doesn’t guarantee “function,” only history.
  • Abiotic soups: Each molecule is low AI; the mixture itself isn’t a single object, so the soup isn’t “high AI.”

Q: Wouldn’t a Penrose tiling or quasicrystal count as high AI?
A: No. They look complex but are generated by short rules. AT assigns them low AI, since their complexity is rule-determined rather than history-dependent.
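The Fibonacci word, a standard one-dimensional model of a quasicrystal, makes the point: the sequence never repeats periodically, yet its generator is a two-line substitution rule, so under AT its causal history is shallow. A minimal sketch:

```python
def fibonacci_word(n: int) -> str:
    """n rounds of the substitution a -> ab, b -> a: an aperiodic,
    quasicrystal-like sequence produced by a trivially short rule."""
    word = "a"
    for _ in range(n):
        word = "".join("ab" if ch == "a" else "a" for ch in word)
    return word

print(fibonacci_word(7))  # abaababaabaab...: intricate, but rule-determined
```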

Q: Does AT only apply to molecules?
A: Cronin and Walker argue it’s general: it applies to any object — molecules, artifacts, language, culture. But so far, AT is only experimentally testable in chemistry (via mass spectrometry). Extensions to artifacts or culture remain conceptual.
