Australian Institute of Grants Management

The Seven Deadly Sins of Program Evaluation

Are you seeking grantmaking perfection? Of course you are.

And the best way to find nirvana is to work out where you're going wrong. Which is why we've delved deep into our vaults to bring you ... "The Seven Deadly Sins of Program Evaluation". Step through all seven for perfection ...

Let's begin at the beginning, with what was, by some accounts, the first evaluation ever:

In the beginning, God created the Heaven and the Earth. And God said, "Let there be light". And there was light. And God saw the light, that it was good…

That's evaluation!

But it didn't stop there. God went on creating - dry land, grass, fish, cattle and so on. And she kept on evaluating as well, finding over and over again that what she was creating "was good".

There was even a program evaluation, which came out rather well - "And God saw everything that she had made, and, behold, it was very good."

The problem with this scenario is that as an evaluation, it doesn't really measure up to today's best practice.

"The most important element in any evaluation - in any project - is knowing what you want."

Let's begin with what the project team did well. First, they were engaging in continuous evaluation - once a day, in fact, which is a pretty taxing schedule for formative evaluation. If it had turned out that, say, the Moon didn't work out according to specifications and had to be taken out of the program, they could have taken that into account straight away and compensated by making the tides operate by some other mechanism, perhaps.

The second plus is that evaluation was taken seriously enough to have it considered at the highest level - by God herself, no less. Top management - really top management - was looking over the returns on a regular basis. That's about it for praise, though. After that, we have to start marking the evaluation down.

Perhaps the most obvious problem is that the feedback isn't very specific. There are only two grades - "good" and "very good" - and neither refers back to the original criteria, perhaps because there don't appear to have been any original criteria. Where are the KPIs? And while the speed of the project is impressive for such a large job, the follow-up is neglected. You really can't call a project "very good" after less than a week. Important impact measurements can't be completed in that time period. Trends take longer to emerge.

In fact, the conclusions do seem to have been rather premature. With the benefit of hindsight, we know that years later the project funder decided to reboot the entire operation by wiping out nearly all the grass, cattle and people in a great flood, and building up the community again from a small group of professionals. That kind of cruel-to-be-kind cut-off decision can be necessary - I'm sure everybody here has had similar discussions in their grants committees - but it does cast some doubt on the optimistic sunniness of the original assessment.

All of which leads up to the next issue: dissemination and diffusion. Someone reading God's program evaluation would be very little wiser about how any other funding body would go about creating another universe next time, if that became necessary. What learnings does one carry away, either for the community that was created or for the overseeing body that was supporting that creativity?

We can say with some confidence that we've learned a lot about evaluation in the last 6000-odd years. The biggest problem in God's evaluation, though, is the absence of initial goals. How can one judge the success of the creator if we don't know what she was trying to achieve? The most important element in any evaluation - in any project - is knowing what you want.

It has been suggested, of course, that what the creator was after was a world full of people singing her praises. We've all known grantmakers who seemed to have that as their main goal, but these days we do tend to look for a little more from our resources. We want to change the world for the better. More than that, we want to learn how to change the world for the better. We want to produce certain outcomes, make particular impacts, and produce those outcomes and impacts quicker and cheaper and more effectively next time. Along the way, though, there are many serious mistakes that can impede our progress.

We call these The Seven Deadly Sins of Program Evaluation.

Next: Avarice