Cambridge University Science Magazine
“There are known knowns…there are known unknowns…but there are also unknown unknowns—there are things we do not know we don’t know.” 

— Donald Rumsfeld

Donald Rumsfeld was talking about weapons of mass destruction, but his remarks are just as pertinent in other spheres of policy-making. In 2009, the swine flu pandemic killed at least 18,000 people; in 2010, the eruption of the Icelandic volcano Eyjafjallajökull severely disrupted air traffic in northwest Europe; and in 2011, the tsunami that followed the Japanese Tohoku earthquake killed tens of thousands and precipitated a nuclear crisis at the Fukushima power plant. Were these known unknowns or unknown unknowns? Should we have been able to predict these disasters? Or could we have been better prepared for the unpredictable?

Risk and uncertainty regularly crop up in science and policy. Risk is the product of the likelihood of an event and the severity of its consequences should it occur. Last year, the Government Office for Science published the “Blackett Review of High Impact Low Probability Risks”, which sets out a number of ways in which such risks can be assessed and quantified. Unfortunately, though, it is not always possible to pin down the relevant probabilities and consequences; what you are left with is uncertainty. So what can we do in the face of such uncertainty?
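To make that definition concrete, here is a minimal worked example in LaTeX notation; the probability and cost figures are invented purely for illustration and do not come from the Blackett Review.

\[ \text{Risk} = \text{probability of the event} \times \text{severity of its consequences} \]
\[ 0.001 \ \text{per year} \times \pounds 1\,\text{billion} \ \text{of damage} = \pounds 1\,\text{million per year} \]

A “high impact, low probability” event is one where the first factor is tiny but the second is enormous, so the product alone can be a poor guide to how much we should worry.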

One proposed solution is the precautionary principle, namely that in the absence of scientific consensus, the burden of proof that an action is not harmful falls on those taking the action. The principle has proved increasingly popular and is enshrined in much of international law, but it remains a slippery concept. As many as 14 different definitions of the precautionary principle have been found in the legal literature; as a result, different parties interpret and apply the principle in different ways. A nagging problem also remains: the precautionary principle does not allow for the risk of doing nothing. For example, the side-effects of a vaccine may not be understood well enough to justify its use, but if it is not employed the disease remains a threat.

Another approach is to ‘ask the experts’. Again, though, this is not a perfect solution: if the science is inherently uncertain and there is little evidence to call on, then what use is a scientist’s gut feeling? Worse still, the public will be angry if the scientists prove to be wrong. Italian scientists are currently on trial, charged with manslaughter, for failing to communicate the risk before the 2009 L’Aquila earthquake. If scientists are asked to make pronouncements in cases where evidence is scant and the cost of getting it wrong is so high, then they are unlikely to be forthcoming.

A third possibility is to build so-called ‘resilient systems’. A resilient system can maintain operations despite suffering from unpredictable faults. The internet is a good example: computer scientists have been pretty adept at constructing a system that does not break too often and is readily fixed. But how does one go about developing resilience in natural systems—how can we protect against unpredictable outbreaks of disease?
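As a loose sketch of the idea (not drawn from the article; the sources and function names are invented), resilient designs tend to combine redundancy with graceful degradation: try the primary route, fall back to alternatives, and keep providing something useful rather than failing outright.

```python
import random
import time


def fetch_from(source: str) -> str:
    """Stand-in for an unreliable operation, e.g. querying one mirror of a service."""
    if random.random() < 0.5:  # simulate an unpredictable fault
        raise ConnectionError(f"{source} is unreachable")
    return f"data from {source}"


def resilient_fetch(sources: list[str], retries: int = 2) -> str:
    """Try each redundant source in turn, retrying after a short pause,
    and degrade gracefully to a cached answer if everything fails."""
    for attempt in range(retries):
        for source in sources:
            try:
                return fetch_from(source)
            except ConnectionError:
                continue  # tolerate the fault and move on to the next source
        time.sleep(0.1 * (attempt + 1))  # brief back-off before retrying
    return "stale cached data"  # degraded, but the system keeps operating


print(resilient_fetch(["mirror-a", "mirror-b", "mirror-c"]))
```

The point of the sketch is that no single failure is fatal; the system anticipates faults in general without needing to predict any particular one.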

The only real way to proceed is with humility, transparency and open discussion; but admitting that you simply do not know is rarely politically simple. The way in which risk and uncertainty are perceived and communicated is therefore vital. Managing the public’s fears is, in many senses, as important as tackling the disaster in hand.

As Niels Bohr is alleged to have said, “Prediction is very difficult, especially about the future.” The rather frustrating challenge that remains for policy-makers is what to do about it.

Tim Middleton is a 4th year undergraduate in the Department of Earth Sciences