We, as human beings, have a problem with uncertainty. Not just basketball fans, not just sports fans, but just about all of us, almost all of the time. We might pay lip service to percentage chances, but deep down, we're more Han Solo than C-3PO: never tell us the odds.
Nate Silver knows a thing or two about this. As the 2012 election drew near this past fall and Barack Obama's victory looked more and more sure, people wanted him to say that Obama having an 80% chance to win meant he was going to win. But Silver kept insisting that it didn't: two times out of ten, Obama would lose. People hated that.
Silver was on a panel this afternoon at the MIT Sloan Sports Analytics Conference moderated by Houston Rockets GM Daryl Morey with Benjamin Alamar, Professor of Sport Management at Menlo College; Jeff Ma, CEO of tenXer and subject of Bringing Down the House; Phil Birnbaum, editor of the sabermetric publication "By the Numbers"; and Alec Scheiner, President of the Cleveland Browns. The title ("True Performance & the Science of Randomness"), like many of the panel titles this weekend, implied some kind of arrow toward true understanding, or at least a waypoint on the way there, but of course the real thing was messier.
What was surprising was how much this was the actual point. It can be seductive to see numbers lined up neatly on a page, or converted sexily into a spray of points on a graph, and feel like you have a grip on the truth. When you encounter pushback from people invested in their strongly held beliefs about the eye test, about their guts, about rings, it can be easy to get sucked into the kind of language they want to use.
At one point late in the panel, I believe it was Ma who said that when you argue with people who speak in certainties, there's a temptation to answer with certainty, even when it's not in your model. All the panelists had talked about how assessing players has to be framed in probabilities: the analysis could be sound, grounded, built on a solid foundation, and still not be able to guarantee anything. And that's not the fault of the data, but the fault of a world where things are changing constantly.
Ma went on to say that something as basic as a small rule change can dramatically affect the models you've designed. For example, a model built before the NBA's change to its hand-checking rules wouldn't perform as well on the game as it's played now. That has nothing to do with the model itself, but with things the model couldn't have accounted for when it was created. The problem comes when this is viewed as a weakness of the data.
In essence, the discussion had veered into territory that's normally the province of religion versus science. In its weaker state as belief or its stronger state as faith, religion is a way for us to deal with uncertainty by taking it out of our hands. When people doubt science, they often say it's because science can't provide all the answers, but this misapprehends it. Science isn't just about providing answers; it's at least as much about creating questions.
What Ma's point about arguing with certainty gets at is that there's no simple truth in numbers. There's an art in how you frame them, in how you tell their story so that your audience buys into them. It's a mistake to try to make science do the work of religion, and it's a mistake to fall prey to the language of absolute certainty when talking about analytics.
Ma predicted that the biggest shift in the next ten years will be people who hate numbers getting involved in stats. If that happens, it will be because the numbers prove themselves on their own merits, not on the merits of the bloodier realm of guts and belief.