Implicit in discussions of evidence-based policy is the notion that increasing the amount and robustness (however this is defined) of evidence available to policy-makers leads to better policy decisions with greater certainty of 'what works'. The aim of this post is not to undermine efforts to encourage policy-makers to use the evidence available in their policy decisions and to monitor the effects of their decisions (for example through Randomised Controlled Trials); rather it is to explore instances and conditions under which accruing more and better evidence might not be sufficient or even necessary for the formation of credible, legitimate and workable policies.
Common sense reasoning might lead us to assume that as we gather and construct more knowledge about the world around us, both through organised academic efforts and other forms of knowledge-making, we are gradually reducing the pool of 'stuff' about which we are ignorant and increasing the certainty with which we can make decisions in all areas of policy. One obvious caveat is that the designation of which knowledge is considered robust and relevant to a policy decision has its own politics and can be continually contested; furthermore, other commentators have drawn attention to the difficulty of providing evidence which is policy-relevant but doesn't stray over the boundary into being policy-prescriptive. But a more radical challenge to this logic comes from the STS scholar Matthias Gross, who argues that the rapid acceleration of knowledge accumulation in the 'knowledge society' has paradoxically constructed more areas of ignorance and made 'surprises' increasingly frequent. This occurs because all new discoveries and new knowledge simultaneously open up new questions and present new problems. Ulrich Beck has recently made a similar argument, stating that ignorance or non-knowledge cannot be overcome through the accumulation of more knowledge and the practising of better science; rather, it is a product of more and better science.

This phenomenon has arguably been observed in the case of emerging technologies, from GMOs to nanotechnology or nuclear power. Another example is the emergence of new data and discoveries related to the disintegration of the West Antarctic Ice Sheet between the 3rd and 4th IPCC assessment reports (more info here), which increased the uncertainty about the report's projections for sea-level rise and became a source of great disagreement between scientists where previously there had been broad agreement. In other words, this new scientific knowledge raised new uncertainties and created new areas of ignorance.
This 2008 book is an excellent summary of the emerging field
So if ignorance, uncertainty and non-knowledge are proliferating in the context of continual scientific and technological endeavour, and if some of these factors are irreducible in the foreseeable future, how can we nonetheless attempt to make credible policy decisions which are held to be legitimate by the actors involved and affected, and which have a good chance of avoiding harm to citizens and the environment? This challenge is especially perplexing in a political culture like the UK's, so grounded in the desire to justify policies with empirical detail and the judgement of experienced experts. Furthermore, STS scholars have pointed out that exaggerating the level of certainty of a particular policy pronouncement in order to bolster political legitimacy, only to be faced with a surprise which undermines the original decision (BSE and the safety of nuclear power are relevant examples here), serves only to further erode the legitimacy of the institution, department or advisory body concerned.
These difficulties are compounded in the case of policies which rely on uncertain projections, for example, of the future demographic mix of the population, the effects of climate change in different environments, or the energy needs and preferences of future people. Furthermore, such policy decisions will be intimately involved in constructing this uncertain future. One prominent policy mechanism for dealing with ignorance and uncertainty has been the precautionary principle, initially developed in Europe to better regulate chemicals, where previously affirmative proof of the harms caused by a new chemical had been required before sanctions or regulations could be put in place. But for Matthias Gross the precautionary principle doesn't go far enough: it offers a prescription of what should not be done but gives policy-makers little indication of what should be done. Furthermore, the precautionary principle fails to accept the persistence of ignorance and uncertainty in some policy contexts. Perhaps more promising are approaches which emphasise the building of capacities for resilience, which could both reduce the damage caused by future surprises and improve institutional responsiveness. Similarly, the 'reflexive modernisation' advocated by Ulrich Beck, Scott Lash and others also sanctions an institutionalised acceptance of uncertainty, ignorance, and the constant presence of diverse risks, implying a need for constant recursive reflection on the limits of knowledge, but also reflex responses to the effects of modernisation.
Matthias Gross offers what is probably the most pragmatic way forward for policy-making in conditions of non-knowledge, advocating a state of constant experimentation between natural and social systems in the implementation of policies. What he suggests is broader than, but not entirely unrelated to, what is called for by prominent actors like Ben Goldacre and Mark Henderson (see previous post). What is perhaps different is his embracing of radical ignorance and uncertainty, underlining the limitations of expert knowledge (this argument has interesting parallels with the economist Tim Harford's book 'ADAPT'). What Gross evokes with the notion of experimentation is not the control and certainty of the laboratory, but rather repeated attempts to deal with and learn from surprising events, which help to give us an awareness of our own ignorance (see more here). Under an understanding of policy-making as experimentation, the certainty of evidence and 'the facts' is no longer a prerequisite for action. Rather, policy-making is understood as the constant implementation and monitoring of efforts to deal with a particular problem, whilst remaining open and responsive to surprise events, new interpretations of the problem and emerging mobilisations of actors. This argument has great relevance to current climate policy debates, for example, with actors increasingly calling for robust, better-than-nothing climate adaptation efforts (e.g. the Hartwell Paper) instead of waiting for the level of certainty required for the implementation of 'optimal' solutions, which may ultimately be impossible to achieve.