Australian Institute of Grants Management (an OurCommunity.com.au enterprise)



News

Whatever happened to evidence-based policy making?

Few in government deny that evidence-based policy-making is important to good outcomes, and Australia's history shows that to be the case. But the past decade has shown a marked decline in those ideals, according to Professor Gary Banks.

This is an abridged version of his Alf Rattigan Lecture for the Australia and New Zealand School of Government (ANZSOG), an engaging and entertaining presentation that looks at what has worked in the past, and at what can be done, at both political and bureaucratic levels, to avoid policy on the run.

Professor Banks spent many years at the Productivity Commission before retiring in 2012 as its long-serving Chair. He is also a past CEO and Dean of ANZSOG, and has held a series of other senior roles in government, education and business.

What are we talking about?


"When I use a word", Humpty Dumpty said in a rather scornful tone, "It means just what I choose it to mean - neither more nor less". "The question is" said Alice, "whether you can make words mean so many different things". (Lewis Carroll)


One of the challenges in talking about EBPM (evidence-based policy making), which I had not fully appreciated last time, was that it means different things to different people, especially academics. As a result, disagreements, misunderstandings and controversies (or faux controversies) have abounded. And these may have contributed to the demise of the expression, if not the concept.

For example, some have interpreted the term EBPM so literally as to insist that the word "based" be replaced by "influenced", arguing that policy decisions are rarely based on evidence alone. That of course is true, but few using the term (myself included) would have thought otherwise. And I am sure no-one in an audience such as this, especially in our nation's capital, believes policy decisions could derive solely from evidence -- or even rational analysis!

If you'll pardon a quotation from my earlier address: "Values, interests, personalities, timing, circumstance and happenstance - in short, democracy - determine what actually happens" (EBPM: What is it? How do we get it?). Indeed it is precisely because of such multiple influences, that "evidence" has a potentially significant role to play.

So, adopting Humpty Dumpty's position, I am inclined to stick with the term EBPM, which I choose to mean an approach to policy-making that makes systematic provision for evidence and analysis. Far from the deterministic straw man depicted in certain academic articles, it is an approach that seeks to achieve policy decisions that are better informed in a substantive sense, accepting that they will nevertheless ultimately be - and in a democracy need to be - political in nature.

A second and more significant area of debate concerns the meaning and value of "evidence" itself. There are a number of strands involved.

Evidentiary elitism?

One relates to methodology, and can be likened to the differences between the thresholds for a finding of guilt under civil and criminal law ("balance of probabilities" versus "beyond reasonable doubt").

Some analysts have argued that, to be useful for policy, evidence must involve rigorous, unbiased research techniques, the "gold standard" for which is the "randomized controlled trial". The "randomistas", to use the term that headlines Andrew Leigh's new book (Leigh, 2018), claim that only such a methodology can truly tell us "what works".

However, adopting this exacting standard from the medical research world would leave policy makers with an excellent tool of limited application. Its forte is testing a specific policy or program relative to business as usual, akin to drug tests involving a placebo for a control group. And there are some inspiring examples of insights gained. But for many areas of public policy the technique is not practicable. Even where it is, it requires that a case has to some extent already been made. And while it can identify the extent to which a particular program "works", it is less useful for understanding why, or whether something else might work even better.

That is not to say that any evidence will do. Setting the quality bar too low is the bigger problem in practice, and the notion of a hierarchy of methodologies is helpful. However, no such analytical tool is self-sufficient for policy-making purposes; in my view they are best thought of as components of a "cost-benefit framework" - one that enables comparisons of different options, employing whichever estimation techniques are most fit for purpose. Though challenging to populate fully with monetized data, cost-benefit analysis (CBA) provides a coherent conceptual basis for assessing the net social impacts of different policy choices - which is what EBPM must aspire to as its contribution to (political) policy decisions.

Evidence ain't evidence


"Everyone is entitled to his own opinion, but not to his own facts." (Daniel Patrick Moynihan)


A more fundamental issue is that evidence itself is generally not immutable, particularly when moving beyond raw data to analysis and interpretation.

Take gambling regulation. As the Productivity Commission has argued, a balanced policy approach would seek to minimize harms to "problem gamblers" without unduly affecting the enjoyment of recreational gamblers. But getting accurate data on how much time or money people spend on gambling is hard (the ABS Household Expenditure Survey shows much smaller numbers than are consistent with industry revenue!), let alone on the consequences of spending "too much". And, as the Commission found in its reviews, what constitutes "too much" is deeply contested. Assumptions about values and behaviour that are integral to estimation can differ, and consequently the Commission's own estimates of the social costs and benefits ranged widely (PC 1999, 2010). Similar issues can arise in other policy areas, particularly in the social and environmental domains.

JFK"We subject all facts to a prefabricated set of interpretations." (John F Kennedy)

So what counts as "evidence" to some need not be acceptable to others, even when methodologically sound. And this of course affects its credibility in a policy sense.

Misuse and abuse

In many cases, however, the evidence will not be "sound". It has become common in policy advocacy for data to be concocted, cherry-picked or manipulated to suit a predetermined position. Such "policy-based evidence" - a term that may have been coined in jest but is seriously apposite - has a long pedigree and even a textbook (Huff's How to Lie with Statistics) to support it!

A topical example is the political debate about rising "inequality" in our society, in which selected indicators have been used to draw conclusions unsupportable by the weight of evidence. As the PC observed in its recent research report on this topic, this can lead to policy approaches that are misdirected and ultimately ineffectual in terms of their own objectives (PC 2018). For example, focusing on the share of income going to the top 1-5 per cent of income earners may suggest that, in the cause of greater "equality", financially successful members of society need to be taxed even more, when what is really needed, according to the Commission, are policies to enhance the living standards and earning potential of those at the bottom. Punitive tax rates at the top end can actually make this harder to achieve.

For some time, economic modeling has been one of the instruments of choice for policy-based evidence, which unfortunately has tended to undermine the public credibility of modeling more generally. Quantitative models have the advantage of opacity combined with an ability to make different "design" and data choices that can shift the results in desired directions. For example, the modeling in support of schemes proposed to overcome the electricity policy "trilemma" associated with reducing carbon emissions, has raised more questions than it has answered, particularly about the basis for projected electricity price falls.

One's own 'facts'

That evidence is so often misused in policy debates may tell us something about how people respond to evidence itself. Increasingly, evidence is judged not on its merits, but by who is using it and for what purpose.

Many have remarked on the increasingly "tribal" nature of our society. In such a world, people are increasingly skeptical about information associated with the "other camp" -- and, I might add, increasingly gullible about any produced by their own. The inequality debate is again a case in point. But so too is climate change, compounded by the fact that most people (lawmakers included) understand neither the science of the "greenhouse effect" nor the economics of different policy responses.

At the extreme, many simply choose to ignore or disregard any evidence or analysis that runs counter to their own views -- views formed on the basis of sentiment, values or ideology. This has no doubt to some extent always been so. But while it was traditionally confined mainly to religious topics or the less educated, we now observe it happening more widely, even at our universities. A recent instance was the attempt to "de-platform" noted Australian psychologist and author Bettina Arndt (daughter of the late Professor Heinz Arndt here at the ANU), because of her dissenting interpretation of AHRC survey data on the prevalence of sexual assaults on campus.

It's what you do with it


Darryl: "This is beautiful, darling! What do you call these things again?"
Sal: "Rissoles. Everyone makes rissoles, darl."
Darryl: "Yeah, but it's what you do with them." (The Castle)


In short, evidence-based policy making faces the challenge that this thing we call "evidence" is rarely the uncontested and objective policy resource that we might imagine it to be. Rather, it can be a battleground of conflicting views, assumptions and interpretations. And therefore the notion that "evidence" should win the day in its own right, appealing as it may be to the research and evaluation community, is fanciful.

That is not to say that evidence cannot be influential in policy decisions - far from it. But it does mean that how (and by whom) it is generated, discussed, tested and utilized matters greatly. To borrow from The Castle, it's not the evidence, "it's what you do with it".

The processes by which policy decisions are informed and made effectively determine what role evidence has to play and how well it plays it. Processes may vary according to the issue at hand and its timing. They reflect institutional capabilities and above all the attitudes and capacity of government leadership -- primarily at the political level, but bureaucratic leadership too, a point to which I will return.

At a general level, we could define a "good" policy-making process as one that informs and engenders support for political decisions by promoting an understanding of:

  • the causes and nature of a policy problem or "issue"
  • the relative merits and trade-offs in different options for dealing with it, and
  • whether the option ultimately chosen turns out as intended.

Clearly to achieve these things, there needs to be a central role for the production of evidence, but also for consultation, deliberation and explanation.

This is an abridged version of Prof Banks' full address. Read it in full via the links below.

MORE INFORMATION

READ IN FULL: The full address on the ANZSOG website | PDF download | Past addresses

ONLINE HELP: More of our newsletters, tools and articles about social measurement
