
Deductive and Inductive Predictions: The latter are often more powerful

In any system that shows overall computational irreducibility, there must inevitably be an infinite number of “pockets of computational reducibility”, in effect associated with “simplifying features” of the behavior of the system.

Stephen Wolfram

There are two ways to make predictions: deductive and inductive. Predictions based on first principles are what I call deductive predictions, while those based on analogy or similarity are inductive predictions. Derek Lowe, on his blog In the Pipeline, has a good example:

If the Protein Folding Problem was set by God to force the human race to really understand the mechanisms behind protein structure, then, well. . . we cheated on the exam. Because we don’t understand those factors well enough to calculate such structures de novo, just using what we know about hydrogen bonds, torsional angles, steric hindrance, pi-stacking interactions and all the other things that add up energetically to stable protein conformations. I mean, we know a lot about those things, but we don’t know enough – not enough to take a big sample of protein sequences and derive from first principles the likely protein structures they’ll form. Most definitely we can’t do something like that with anything like the speed and success rate of the pattern-matching provided by AlphaFold-type machine learning.

We used the large and well-curated pile of structural data in the PDB to take that shortcut, and it has turned out that proteins use many of the same tricks and patterns and combinations often enough that this approach really has worked out well.

In Lowe’s example, trying to predict protein folding from fundamental physics would be a deductive prediction, while using the folds of similar sequences, as AlphaFold does, is an inductive prediction.
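The inductive shortcut can be caricatured in a few lines of code. The sketch below is purely illustrative and bears no resemblance to how AlphaFold actually works: the sequences, fold labels, and similarity measure are all made up. It just shows the shape of the idea, namely predicting by looking up the most similar known case rather than deriving the answer from first principles.

```python
def hamming_similarity(a: str, b: str) -> int:
    """Count matching positions between two equal-length sequences."""
    return sum(x == y for x, y in zip(a, b))

def inductive_predict(query: str, database: dict) -> str:
    """Predict by borrowing the label of the most similar known sequence,
    the way pattern-matching approaches lean on a pile of solved cases."""
    best = max(database, key=lambda seq: hamming_similarity(query, seq))
    return database[best]

# A made-up "database" of sequence -> fold labels (a toy stand-in for
# the curated structural data in the PDB).
known_folds = {
    "AAVK": "helix",
    "GGPG": "loop",
    "VVIV": "sheet",
}

print(inductive_predict("AAVR", known_folds))  # prints "helix"
```

A deductive version, by contrast, would have to compute the structure from an energy model of the sequence itself, with no database in sight, which is exactly the part Lowe says we cannot yet do well enough.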

In the same post, Lowe also shares his views on writing with and without the help of AIs.