Wednesday 13 February 2013

Evidence-based policy and the problem of non-knowledge

Following on from my post a few weeks back on the curious British-ness of the evidence-based policy debate, it occurred to me that recent work in Science and Technology Studies (STS) and the sociology of knowledge might offer useful insights into the challenges facing those advocating evidence-based policy. Most of those calling for the more systematic use of evidence in policy acknowledge the value-ladenness and political nature of policy decisions, accepting that ultimate policy decisions are rarely made on the basis of evidence alone. That different experts and forms of evidence will often disagree is another commonly acknowledged problem with the process. What is often left out of the discussion are the problems associated with ambiguity, ignorance and even non-knowledge, as well as the obstinate impossibility of accurately predicting the future.

Implicit in discussions of evidence-based policy is the notion that increasing the amount and robustness (however this is defined) of the evidence available to policy-makers leads to better policy decisions, made with greater certainty about 'what works'. The aim of this post is not to undermine efforts to encourage policy-makers to use the available evidence in their decisions and to monitor the effects of those decisions (for example through Randomised Controlled Trials); rather, it is to explore instances and conditions under which accruing more and better evidence might not be sufficient, or even necessary, for the formation of credible, legitimate and workable policies.

Common sense reasoning might lead us to assume that as we gather and construct more knowledge about the world around us, both through organised academic efforts and other forms of knowledge-making, we gradually reduce the pool of 'stuff' about which we are ignorant and increase the level of certainty with which we can make decisions in all areas of policy. One obvious caveat is that the designation of which knowledge counts as robust and relevant to a policy decision has its own politics and can be continually contested; furthermore, other commentators have drawn attention to the difficulty of providing evidence which is policy-relevant but doesn't stray over the boundary into being policy-prescriptive. But a more radical challenge to this logic comes from the STS scholar Matthias Gross, who argues that the rapid acceleration of knowledge accumulation in the 'knowledge society' has paradoxically constructed more areas of ignorance and made 'surprises' increasingly frequent. This occurs because new discoveries and new knowledge simultaneously open up new questions and present new problems. Ulrich Beck has recently made a similar argument, stating that ignorance or non-knowledge cannot be overcome through the accumulation of more knowledge and the practising of better science; rather, it is a product of more and better science. This phenomenon has arguably been observed in the case of emerging technologies, from GMOs to nanotechnology and nuclear power. Another example is the emergence of new data and discoveries relating to the disintegration of the West Antarctic Ice Sheet between the 3rd and 4th IPCC assessment reports (more info here), which increased the uncertainty around the report's projections for sea-level rise and became a source of great disagreement between scientists where previously there had been broad agreement. In other words, this new scientific knowledge raised new uncertainties and created new areas of ignorance.

[Image caption: This 2008 book is an excellent summary of the emerging field.]
'Agnotology' is one heading under which new studies of ignorance are emerging, underpinned by the understanding that ignorance cannot simply be treated as an absence of knowledge waiting to be corrected; rather, it is often actively constructed and sustained by key social institutions - for example, by trust and the need for privacy. Ignorance might also be constructed through the need for researchers to specialise in particular areas rather than others, and to close down the problems being posed in order to make them more easily answerable and addressable. This new body of work points out that by designating a group as ignorant we are effectively making a claim to knowledge of a particular object or phenomenon; thus we cannot even assume that all groups will agree on what constitutes knowledge and what constitutes ignorance in any given context. Related to this is the oft-repeated phrase 'there are known unknowns and unknown unknowns', which draws attention to the potential multitude of instances in which we are entirely unaware of our own ignorance. So whilst in some cases we might be able to 'correct' ignorance and non-knowledge through further inquiry and knowledge-making, there are likely to be cases where ignorance remains irreducible - either because we are unaware of it or because it lies beyond the scope of our methods and approaches.

So if ignorance, uncertainty and non-knowledge are proliferating in the context of continual scientific and technological endeavour, and if some of these factors will remain irreducible for the foreseeable future, how can we nonetheless make credible policy decisions which are held to be legitimate by the actors involved and affected, and which have a good chance of avoiding harm to citizens and the environment? This challenge is especially perplexing in a political culture like the UK's, so grounded in the desire to justify policies with empirical detail and the judgement of experienced experts. Furthermore, STS scholars have pointed out that exaggerating the level of certainty in a particular policy pronouncement in order to bolster political legitimacy, only to be faced with a surprise which undermines the original decision - BSE and the safety of nuclear power are relevant examples here - serves only to further erode the legitimacy of the institution, department or advisory body concerned.

These difficulties are compounded in the case of policies which rely on uncertain projections - for example, of the future demographic mix of the population, the effects of climate change in different environments, or the energy needs and preferences of future people. Furthermore, such policy decisions will themselves be intimately involved in constructing this uncertain future. One prominent policy mechanism for dealing with ignorance and uncertainty has been the precautionary principle, initially developed in a European context to better regulate chemicals, where there had previously been a need for affirmative proof of the harms caused by a new chemical before sanctions or regulations could be put in place. But for Matthias Gross the precautionary principle doesn't go far enough - it offers a prescription of what should not be done but gives policy-makers little indication of what should be done. Furthermore, the precautionary principle fails to accept the persistence of ignorance and uncertainty in some policy contexts. Perhaps more promising are approaches which emphasise the building of capacities for resilience, which could both reduce the damage caused by future surprises and improve institutional responsiveness. Similarly, the 'reflexive modernisation' advocated by Ulrich Beck, Scott Lash and others sanctions an institutionalised acceptance of uncertainty, ignorance and the constant presence of diverse risks, implying a need for constant recursive reflection on the limits of knowledge, but also for reflex responses to the effects of modernisation.


Matthias Gross offers what is probably the most pragmatic way forward for policy-making under conditions of non-knowledge, advocating a state of constant experimentation between natural and social systems in the implementation of policies. What he suggests is broader than, though not entirely unrelated to, what is called for by prominent actors like Ben Goldacre and Mark Henderson (see previous post). What is perhaps different is his embrace of radical ignorance and uncertainty, underlining the limitations of expert knowledge (this argument has interesting parallels with the economist Tim Harford's book 'Adapt'). What Gross evokes with the notion of experimentation is not the control and certainty of the laboratory, but rather repeated attempts to deal with and learn from surprising events, which help to give us an awareness of our own ignorance (see more here). Under an understanding of policy-making as experimentation, the certainty of evidence and 'the facts' is no longer a prerequisite for action. Rather, policy-making is understood as the constant implementation and monitoring of efforts to deal with a particular problem, whilst remaining open and responsive to surprise events, new interpretations of the problem and emerging mobilisations of actors. This argument has great relevance to current climate policy debates, for example, with actors increasingly calling for robust, better-than-nothing climate adaptation efforts (e.g. the Hartwell Paper) instead of waiting for the level of certainty required to implement 'optimal' solutions, which may ultimately be impossible to achieve.


8 comments:

  1. Very interesting post. I believe, however, that the title belies the potentially important role of non-knowledge in evidence-based policy (as advocated by Gross), whereby non-knowledge is no longer constructed as a 'problem' but rather as potentially part of the solution... Readers might enjoy more readings on ignorance at the Sociology of Ignorance website http://sociologyofignorance.com/

    1. Thanks for your insightful and constructive comment, Joanne. I agree that the title could have been chosen better - it's difficult to strike the balance between punchy, accurate and informative. Perhaps 'The role of non-knowledge in evidence-based policy making' would have been better. Thanks also for directing me to this very useful resource - the reading list is especially good to be aware of!

  2. Did you know Rupert Read is working on a conditional precautionary principle with an emphasis on increasing resilience, together with Nassim Taleb, at the moment? http://www.fooledbyrandomness.com/precautionary.pdf

    1. No I didn't, so thanks for the link, David! Looks like a really interesting project.

  3. Really interesting and thoughtful post. Thanks for sharing! I found it thanks to an #agnotology tag on Twitter.

    Now that God is dead (and we killed him) the youth need some moral absolutes with which to orient themselves in their world, and anything with "evidence-based" in the title seems to be fitting the bill, for a certain brand of middle class sceptical liberal who has not yet woken up to the fact that a "market" is as much an ideological construction as "a politburo." This, I suspect, explains the very British nature of the debate: with a lesser arts/science cleavage in their Wissenschaften, yer continentals just aren't so narrowly ignorant.

    Paradoxically, it seems this is just about to get worse for a while, as the British scientific community fully embraces open access publishing while the humanities and social sciences continue to lock up the good stuff behind paywalls. This leaves abhorrent voids all over the noösphere which the internet is only too happy to ablate with stuff and nonsense.

    1. Thanks for your interesting comment, Douglas - glad you enjoyed the post. It's very interesting that you link it to the current open access debate. I also find it worrying that the current push for 'gold' open access in the UK seems to favour the funding and publishing structures of the sciences over the social sciences and humanities, though I accept that the problem is also due to a lack of motivation among academics.

      It is also interesting to follow this attachment to straightforwardly evidence-based policy through the new 'what works' centres launched today by the UK government, based on the NICE model (http://www.independent.co.uk/news/uk/politics/government-to-try-crowd-sourcing-key-policies-to-see-what-works-8518515.html).

  4. >the current push for 'gold'

    Boy-o-boy did the open access movement score an own goal when it adopted this green v. gold terminology. I'm joining Cameron Neylon in refusing to use it. Much better just to say what you mean, using words such as "institutional repository" or "online peer-reviewed journal," and consider funding separately. Conflating "gold" with "online journals based on an article processing charge business model" was unfortunate indeed—of course, many 'gold' journals run on shoestring budgets and charge no author fees, but for obvious reasons gold and eyewatering publication fees are conflated.

    I quite agree that one size does not fit all, and if the humanities and social sciences (HSS) want to make different use of the publishing possibilities offered by the invention of the web (and they do, see e.g. these remarks by historian Peter Mandler) then the world should listen. However, if HSS authors actually want to be read, cited, and influential then they will want to make their words freely available on the network.

    From my own medically influenced perspective, it seems to me that, 20 years after the invention of the web, anyone reporting research on human subjects who does not make it freely world-readable is acting in a way that is immoral, unethical, and unscientific.

    As for evidence-based policy, well, I suppose it is better late than never. It is obvious that if you take an action, you should then monitor the results of that action. The French Assemblée Nationale has long been conducting reviews of new laws after they've been in operation for a while, I believe. My hunch is that republics, having accepted their human fallibility, may be more likely to do this than monarchies.

    1. Yes, rather tripped up by my short-hand there - I certainly agree that the gold/green distinction can be very unhelpful.

      From an ethical standpoint I think there are also broader issues related to access to research - just because something is physically accessible, we have no reason to assume that it will be accessed, read, understood or even seen as relevant by potential 'knowledge-users'. I think much more creativity and imagination is needed in academic communication and engagement with those outside of academia. My hunch is that not all of the necessary functions can be carried out by academic papers as we know them today - whether they are open access or not.

      I agree with your comments on national differences in how 'evidence' is dealt with in policy - see this previous post http://thetopograph.blogspot.co.uk/2013/01/evidence-based-policy-very-british.html. I guess my worry is that rationalism merely becomes a new ideological position, and a triumphalist one at that, which has very little to do with accepting human fallibility and irreducible uncertainties - I haven't seen much discussion of this in the UK debate.
