The air in the trading room felt thick this morning. It was an ordinary Tuesday, but something felt different. Maybe it was the quiet: the analysts hunched over their screens, the slow hum of the cooling system. Or maybe it was just the news, the latest remarks from Demis Hassabis, CEO of Google DeepMind, on the limits of artificial intelligence.
Hassabis, according to reports, drew a key distinction. Coding, math, and even games, he suggested, are more tractable for AI because their outputs are verifiable: a program either passes its tests or it doesn't. Policy and public decision-making are not like that. Those domains, he argued, are subjective and difficult to evaluate consistently. It is a point that seems obvious once stated, but one worth repeating in the current climate.
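The distinction can be made concrete. For a coding task, correctness can be checked mechanically, with no human judgment in the loop. A minimal sketch in Python (the function and test cases are illustrative, not drawn from any system Hassabis described):

```python
def sort_numbers(xs):
    """A candidate solution an AI might produce for a coding task."""
    return sorted(xs)

def verify(solution, test_cases):
    """Mechanical verifier: passes iff every case matches the expected output."""
    return all(solution(inp) == expected for inp, expected in test_cases)

# The task comes with a checkable specification.
cases = [([3, 1, 2], [1, 2, 3]), ([], []), ([5], [5])]
print(verify(sort_numbers, cases))  # True: correctness is decidable here

# A policy question ("was this tax change good?") has no such test
# suite; there is no single expected output to compare against.
```

That asymmetry, a verifier exists for one class of problems and not the other, is the whole of the argument in miniature.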
The implications are everywhere. Consider the recent tax law changes. The Lilly Family School of Philanthropy at Indiana University found in a recent study that giving patterns shifted almost immediately: high-net-worth donors reacted one way, smaller nonprofits another. Could an AI reliably predict, or even understand, the human calculus behind those decisions? There is little evidence yet that it could.
The core of Hassabis's argument is about the nature of truth itself. In domains like coding, where there is a verifiably right answer, AI can excel. But in the messy world of policy, where the best answer is contested, it stumbles. It is a point that echoes through the halls of the Urban-Brookings Tax Policy Center, where experts constantly wrestle with the human element in economic modeling.
There is a good reason for this. As the IRS guidance on charitable giving makes clear, incentives and behaviors are complex, and they change fast. Take the drop in giving after the 2017 tax law changes, a decrease of 10 to 15 percent by some estimates. Could a model have predicted that in advance? Could it have understood why it happened?
None of this means AI is useless, far from it. It can crunch numbers, identify patterns, and make predictions. But it cannot yet replace the human judgment required for decisions where there is no single right answer. The market seems to agree; the caution was evident in the morning's slow trading.
Still, the question lingers: what happens when we trust AI with decisions it isn't ready for? The answer, as Hassabis suggests, is not yet clear.