The Precautionary Principle

Or: Should you really never change a working system?

Given an innovation whose future positive and negative impact are uncertain, which position should be taken regarding its adoption?

The precautionary principle states that the proponents of an innovation should have to prove its harmlessness before it is introduced. That’s because we know that the world currently more or less works, whilst we can’t be sure that it will continue to work once a major innovation has been introduced. Genetically modified (GM) crops around 30 years ago are a good example. Back then, we knew that our food supply was relatively secure without GM crops, but we didn’t know for sure what their negative side effects might be.

In contrast, what I call damage by inaction is the idea that we should try to improve our world unless we are certain that it is already optimal. Do we believe that random mutation and blind breeding by our ancestors have produced the best possible crops? Probably not, and genetic modification is therefore likely to be beneficial.

Nassim Nicholas Taleb, an intellectual whose substance if not style I appreciate, is a proponent of the precautionary principle. In Antifragile, he argues that many systems naturally work well and that without fully understanding them we are likely to make them worse by fiddling with them. One example he mentions is our feet, which may be better served by minimalist shoes that let them work in a natural way than by padded or high-heeled shoes that, although intended to support our feet, may actually cause problems over the long run (sorry).

Taleb is particularly disdainful of transhumanists, who believe that we should aim to improve ourselves by technological means. One area of improvement that is important to transhumanists is life span, which they believe should be extended, if possible, to infinity.

In their essay The Reversal Test: Eliminating Status Quo Bias in Applied Ethics, Nick Bostrom and Toby Ord ask whether, if there were a medically safe intervention that enhanced human intelligence, it would be a good idea to use it. They argue that there is a cognitive bias against interventions like this, which they call status quo bias. Status quo bias causes people to inappropriately favor the way things are over alternative scenarios.

They suggest that status quo bias can be detected by reversing the original question. For their example, this would mean asking if a decrease in intelligence would be a good thing. If the answer to this is negative as well, the implication is that the current level of intelligence is optimal, since it shouldn’t be either increased or decreased.

How likely is it that the current level of human intelligence is really optimal, especially considering that it varies across places and times? Not very. Therefore, simultaneous opposition to interventions that would increase human intelligence and to interventions that would decrease it is indicative of status quo bias.

Of course, the same applies to any other parameter. If someone argues that a parameter shouldn’t be changed in one direction, but also not in the other direction, they have to justify why its current level is optimal.
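The reversal test described above can be sketched as a toy decision procedure. This is only an illustrative sketch in Python; the function name, parameters, and return strings are my own invention, not anything Bostrom and Ord specify.

```python
def reversal_test(opposes_increase: bool, opposes_decrease: bool,
                  argues_current_is_optimal: bool) -> str:
    """Toy encoding of the reversal test (illustrative only).

    If someone opposes changing a parameter in *both* directions,
    they owe an argument that its current value is optimal; absent
    such an argument, status quo bias is the likely explanation.
    """
    if not (opposes_increase and opposes_decrease):
        # Opposing change in only one direction raises no reversal-test issue.
        return "no reversal-test issue"
    if argues_current_is_optimal:
        return "defensible: current level argued to be optimal"
    return "suspect status quo bias"

# Opposing both longer and shorter lifespans, with no optimality argument:
print(reversal_test(True, True, False))  # -> suspect status quo bias
```

The point of the sketch is simply that the third argument does the work: opposition in both directions is only defensible together with an explicit case for optimality.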

An optimal level may be observed for parameters on which evolution has acted (e.g. the size of the heels of our feet), but in other cases this is unlikely (e.g. when evolution, being a process of random mutation and selection, could not possibly have found the best solution).

If human lifespan shouldn’t be extended, should it be shortened instead? If not, this implies that it is ideal right now, which is unlikely based on the evolutionary argument: life expectancy now is more than twice what it was in the ancestral environment. This does not, however, dismantle Taleb’s argument that changing a working system can have disastrous negative effects. But it does weaken his case against the transhumanist ideal of extending lifespan.

The two heuristics – the precautionary principle and damage by inaction – correspond to a conservative and a progressive view of technological progress, respectively. The conservative precautionary principle applies in cases where evolution is likely to have found an optimal solution that is still valid in today’s environment. In all other cases it is not at all clear which heuristic should be preferred, and because the two give contradictory recommendations there, neither is useful on its own as a basis for decision making.