Many, many industrial jobs are falling to automation. In many areas, process workers are an endangered species. The movement towards driverless cars, for example, is today only picking off jobs on the fringes - mining trucks are already self-driving - but it will eventually shake out hundreds of thousands of jobs.
Up till now, the economy has coped surprisingly well. As machines take over in some fields, jobs have been created in areas that are harder to automate, jobs that require judgement, common sense, and human sympathy - jobs like grantmaking.
Any grantmaker who on that basis congratulates themselves on their job security, though, needs to take a long look at the most important term in that last sentence. Which is not, as it happens, "judgement" or "common sense" or "sympathy". It's "harder".
We've all seen the slogan "The difficult we do immediately. The impossible takes a little longer." For us humans, that's a chirpy and irritating witticism. For artificial intelligence (AI), it's a mission statement. And the important word in that sentence is "little".
Because - well, let's look at how the grantmaking field works now. It's a large field, with little regularity and little standardisation. There's a range of working methods and a spectrum of operations.
At one end, there's the old-fashioned gut-feel method. You read all the applications - basically rambling letters from almost anyone, saying why they think they deserve some money. You weigh up an uncountable multitude of factors and pick a few decision points from a buzzing, teeming mass of possible influences. You hand out money in highly flexible - and thus highly personal - ways to recipients who do really, really good things; but if you're ever asked to explain exactly why Paul got the grant and Peter didn't, you'll have to fall back on case-by-case reasoning and a fair dose of intangibles.
At the other end of the spectrum you have the timid bureaucrat, someone who doesn't want to take personal responsibility for any decision, and thus demands that the process be fed through a series of sieves.
The criteria for applications are as far as possible numeric and unambiguous, drawing thick black lines between cases that look to the outside observer almost indistinguishable. There's no room for appeal and no sympathy for bad luck.
Selection, too, involves numerical ranking: points are allocated to each specified element of the task's demands, and these are then summed to produce a clear list. A losing applicant can't argue the toss, any more than a golfer can complain that the scorer should have given less attention to the number of their strokes and more attention to their environmental soundness in replacing their divots.
Sketching this spectrum, incidentally, doesn't imply any criticism of either approach. There are many possible goals to pursue in the grants process, and some of them - flexibility, say, and predictability - are incompatible.
Afterwards, too, there are two approaches to impact assessment. The qualitative approach seeks to understand the outcomes, expressing that understanding in broad principles; quantitative approaches track measurable numbers and concentrate on seeing whether prior promises have been met or missed.
At the moment, numerical approaches are largely bluffing, because the core data (income gains, health measurements, educational progress) have to be based on what people tell you rather than on checking the relevant databases, but this is changing; researchers are already going to big data for answers.
Numbers in, numbers out: that's an algorithm. Anything that can be expressed as an algorithm can be delegated to a computer. A grantmaking program can not only give out funds to whichever not-for-profit scores best on the algorithm, it can also modify that algorithm - by looking at which criteria are most significant in producing change, or by adding new ones that tweak behaviour towards the desired norm.
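The points-in, ranking-out process is simple enough to sketch in a few lines. This is a toy illustration, not anyone's real assessment model - the criteria, weights and applicant figures here are all invented:

```python
# Toy sketch of a points-based grant ranking algorithm.
# Criteria, weights and applicant data are invented for illustration.

CRITERIA_WEIGHTS = {
    "need": 3.0,          # points per unit of assessed need
    "track_record": 2.0,  # points per unit of past delivery
    "reach": 1.0,         # points per thousand people served
}

def score(applicant):
    """Sum weighted points across every specified criterion."""
    return sum(weight * applicant.get(criterion, 0)
               for criterion, weight in CRITERIA_WEIGHTS.items())

def rank(applicants):
    """Return applicant names, best score first - the 'clear list'."""
    return sorted(applicants, key=lambda name: score(applicants[name]),
                  reverse=True)

applicants = {
    "Paul":  {"need": 8, "track_record": 5, "reach": 2},
    "Peter": {"need": 6, "track_record": 6, "reach": 4},
}
print(rank(applicants))  # the grant goes to whoever tops the list
```

Everything here is mechanical: change the weights and the same applicants produce a different "clear list", which is exactly the lever the program can learn to pull.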
Given 50 years (say) of whole-of-government data, it can identify the variables that correlate with resilience, self-improvement, and mental health, and it can then exert pressure on those variables in real time in a vast feedback loop.
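A crude, miniature version of that feedback loop can be sketched too: measure which assessment criterion actually correlated with outcomes in past grants, then nudge its weight accordingly. Again, every number and name here is invented for illustration:

```python
# Toy sketch of the feedback loop: find which criterion correlates
# most strongly with a measured outcome, then adjust its weight.
# All data and criteria are invented for illustration.
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Historical grants: criterion scores at assessment, outcome measured later.
history = [
    {"need": 8, "track_record": 5, "outcome": 0.9},
    {"need": 6, "track_record": 7, "outcome": 0.4},
    {"need": 9, "track_record": 4, "outcome": 0.8},
    {"need": 5, "track_record": 8, "outcome": 0.3},
]

weights = {"need": 1.0, "track_record": 1.0}
outcomes = [g["outcome"] for g in history]
for criterion in weights:
    r = pearson([g[criterion] for g in history], outcomes)
    weights[criterion] *= 1 + r  # boost or shrink weight with correlation

print(weights)  # criteria that predicted outcomes now count for more
```

Run at scale over decades of whole-of-government data instead of four made-up rows, this is the "vast feedback loop" - the machine keeps reweighting whatever turns out to move the outcome variables.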
And don't think that avoiding numbers is going to save you, either. That's what doctors thought - that there was an intangible and unmechanisable essence of doctor, an instinct developed looking at x-rays of melanomas for 40 years that couldn't be set out in words, still less figures - knowledge that was tacit, not explicit; the difference between learning what a bicycle is and learning how to ride a bicycle. It turns out, however, that a robot can learn to ride a bicycle, and a robot can diagnose melanomas better than a clinician - first better than a novice clinician, and then better than any clinician.
The machine looks at what you look at, and recognises patterns, and applies them. Either your work is entirely random, or you're doing things for a reason; and if that's the case, AI can eventually work out what that reason is. We'd like to think - wouldn't we? - that we're doing our grantmaking for a reason. We're in the gunsights.
As with any prophecy, the bit that's doing all the work is the date. When, exactly, do you have to schedule your unemployment? Not in 10 years, possibly in 20, probably in 30. Give or take.
Grantmaking does, after all, have its protections.
One is that while outcomes measurement is a hotly contested field, the most obvious measure - client satisfaction - doesn't work at all for grantmaking. Those who get the grants will be very satisfied; those who don't will not be. As Louis XIV said, "If I grant one office, I make ninety-nine men unsatisfied and one ungrateful."
This gives you an applicant approval rate of just 1% whether you do everything right or everything wrong.
So we can prove mathematically that we can't do better, and we can't do worse. Beat that, Deep Thought!