Why 'real' policy impact is so difficult to evidence
In this post, republished with permission, Professor Christina Boswell asks how we can tell what function research plays in policy-making. It's a timely question ahead of tomorrow's 2014 Power to Persuade symposium.
Christina Boswell is Professor of Politics at the University of Edinburgh and writes on politics, knowledge and immigration at her blog, where this article was originally published.
Many of us recently went through the painful experience of trying to evidence the impact of research on policy, as part of the Research Excellence Framework 2014 process. One of the problems with this endeavour is that policy-makers are likely to be reticent about the influence of research precisely in cases where it has affected policy. Yes, I know that sounds counter-intuitive. Let me elaborate.
Most studies of the uses of research in policy-making focus on how far research is used to adjust policy. Indeed, this instrumental, or problem-solving, model dominates thinking about evidence-based policy, as well as the impact agenda. Back in the 1970s, Carol Weiss famously challenged this idea, suggesting that research often has a more subtle and gradual impact through its ‘enlightenment’ function. But while the notion of enlightenment seemed to capture the influence of knowledge in many cases, it still rested on the basic assumption that the value of research for policy lies in its capacity to improve government decision-making or performance.
This instrumental view overlooks the more symbolic ways in which knowledge can be a valuable resource for politicians. In previous work (Boswell 2009), I distinguished two such uses: legitimising and substantiating. Legitimising knowledge use is where policy-makers value research as a means of bolstering their credibility in taking sound, rational decisions. They can point to the fact that they commissioned research, or host a research unit, or carry out data analyses of policy problems. Substantiating knowledge use refers to the deployment of research to back up particular claims or preferences. Policy-makers can invoke – ideally independent – research findings to add weight to their claims.
My study of the political uses of research in the field of immigration policy suggested that much – probably most – research used by policy-makers in the UK, Germany and the European Commission was valued for its substantiating or legitimising functions. That may not apply as widely in more technical policy areas, or in those less prone to symbolic policy-making. But I suspect that much of the ‘impact’ laid claim to in REF case studies would fall into this symbolic category.
And now comes the paradox. How can we tell what function research is playing in policy-making? How can we distinguish between instrumental, legitimising and substantiating uses? I developed three indicators that might help us gauge the function played by knowledge. One of these was the extent to which governments publicised or drew attention to the research they cited, commissioned or carried out. Where research is valued for its legitimising function, we would expect policy-makers to be keen to publicise the existence of the study/research unit, highlighting the authority and independence of its authors. They would be less concerned about content; the point is to signal the credibility of their knowledge base. Where research is valued for its substantiating function, we would expect policy-makers to focus on substantive findings that support their claims. We might also expect them to be keen to demonstrate its robustness, especially in the face of scepticism from their opponents.
But when research is used instrumentally to adjust policy, policy-makers will be at best neutral about publicising it. The point of instrumental research use is that the research in question is considered a resource for improving policy outputs or performance. The political benefits accruing from these adjustments lie in how they affect the target of intervention – not in what they signal about government competence or credibility. So policy-makers may see little point in publishing or referencing the underpinning research. Some organisations may even be reluctant to credit pieces of research that have influenced their thinking.
Moreover, if we accept Weiss’s point about enlightenment, governments may hardly be aware of how concepts or insights from research have gradually shifted their thinking. Yet it is often these processes of gradual diffusion that are most likely to bring about radical shifts in how policy problems are framed.
The upshot is that real impact – as defined by REF – is going to be far more difficult to track than more symbolic forms of research utilisation. Governments will be keen to broadcast research that supports their arguments or bolsters their credibility. They will be far more reticent about findings and ideas that truly had an impact on policy.