Two Books on Prediction
- Reviewed by Josh Trapani
- December 3, 2012
Nate Silver’s The Signal and the Noise and Sasha Issenberg’s The Victory Lab apply Moneyball-style predictions to a variety of fields.
The Victory Lab: The Secret Science of Winning Campaigns by Sasha Issenberg
The Signal and the Noise: Why So Many Predictions Fail - But Some Don’t by Nate Silver
When a flyer for one of Maryland’s candidates for Senate arrived in my mailbox a few weeks before the election, it got much more attention than the usual cursory glance and toss in the trash. I scrutinized the oversized glossy postcard carefully: Who exactly was the sender? Was it addressed to me personally? What issues did it mention? Did it specifically urge me to vote or was it merely a plea for support? Most fundamentally, what I wanted to know was: why was I selected to receive this particular piece of mail?
If that sounds self-aggrandizing or possibly paranoid, it’s not. I was in the midst of reading The Victory Lab by Sasha Issenberg, which provides good reason for wondering such things. I’ve seen a number of reviews characterize this book as “Moneyball for politics,” and the description is apt. Moneyball, by Michael Lewis, is about using analytics to choose the best players and improve the performance of a major league baseball team, and The Victory Lab looks at how political campaigns have used experiments and predictive models to identify voters who might be persuaded to support their candidate and maximize voter turnout. Both Moneyball and The Victory Lab are narrative-driven expositions that focus on personalities rather than on nitty-gritty analytical details. Both describe environments where the underlying challenge is expending resources in the most efficient way possible. And both tell the tale of folkloric, intuition-driven decision-making as it cedes ground to testable, data-driven decision-making. (Though in a way The Victory Lab is two stories, since Democratic and Republican campaigns did not quite evolve in tandem in their use of predictive analytics.)
To draw this inter-book comparison so closely takes nothing away from Issenberg. Indeed, understanding how political campaigns operate is more important to us and to our democracy than knowing how baseball teams — or other businesses — make decisions. Issenberg, who writes the Victory Lab column for Slate, brings to the surface the machinations that underlie the headlines and attack ads most members of the public see — whether they like it or not — day in and day out during a campaign. One of my few criticisms, in fact, is that the book makes so little connection between these two levels of activity within a campaign that, after reading it, I don’t have much of an understanding of how they interact.
Very much like retailers, people who run political campaigns are interested in knowing more about you. They especially want to know how likely you are to: 1) support their candidate, and 2) vote, with these two attributes not necessarily correlated. In a perfect world (well, theirs, at least) they’d be able to talk with you individually to find out. Since they can’t, they will mine every available data source for clues: census and other demographics, your voting records, donation history, purchases, organizational memberships or religious affiliations ... anything they can get their hands on. Then they’ll throw all of this information into algorithms to predict how likely you are to possess these two key attributes.
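To make the two-score idea concrete, here is a minimal sketch of how such a model might work. Everything here is an invented illustration — the feature names, weights, and thresholds are hypothetical and do not come from Issenberg’s book or any real campaign — but the shape (a weighted sum of known traits squashed into a probability, computed separately for support and for turnout) matches the kind of scoring the book describes.

```python
import math

def logistic(x):
    """Squash a linear score into a probability between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-x))

# Invented weights: how much each known trait nudges the "support" score.
SUPPORT_WEIGHTS = {"registered_same_party": 2.5,
                   "donated_to_candidate": 1.5,
                   "aligned_group_member": 0.8}

# Invented weights for the separate "turnout" score.
TURNOUT_WEIGHTS = {"voted_last_election": 2.0,
                   "voted_in_primary": 1.0}

def score(voter, weights, bias=-1.0):
    """Sum the weights of the traits this voter has, then squash."""
    total = bias + sum(w for trait, w in weights.items() if voter.get(trait))
    return logistic(total)

voter = {"registered_same_party": True,
         "voted_last_election": True,
         "donated_to_candidate": False}

support = score(voter, SUPPORT_WEIGHTS)   # ~0.82: probably on board
turnout = score(voter, TURNOUT_WEIGHTS)   # ~0.73: probably votes

# A campaign might spend mail only on voters who are persuadable
# (middling support score) AND likely to actually vote.
persuadable = 0.35 < support < 0.65 and turnout > 0.5
```

The key point the sketch captures is that the two scores are computed independently: a voter can be a near-certain supporter but an unlikely voter (worth a turnout call), or a likely voter whose support is genuinely uncertain (worth a persuasion mailing).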
So let’s say you are a 30-year-old African American woman from Manhattan, registered as a Democrat, with no known religious affiliation and a history of donating to Planned Parenthood. Chances are you won’t be hearing much from Republican presidential candidates. Their algorithm will score you as beyond persuasion and not worth an investment of their resources. (Of course there is a chance the model is wrong and you are undecided, or even a raging Republican who simply hasn’t updated her registration. I’d like to say the models are extremely unlikely to be wrong, but counter-examples abound. To take just one: my wife, a relatively young Latina who donated enough to the Obama campaign to earn an ambassadorship to a small European nation, has received Romney mailings at her parents’ house in Michigan, where she has not lived for six years.)
Chances are you won’t be hearing much from Democratic presidential candidates, either, at least not to ask for support — they’ll score you as firmly on board — though they may call or send you mailings to ensure you actually go to the polls.
But what if you live in a swing state and your demographics and other affiliations don’t send such a clear signal? You may expect to hear more from the campaigns, and the messages they send you and the issues they touch upon won’t be plucked from the air: they’ll be things that have resonated — in polls and focus groups — with people similar to you in demographic characteristics, geography and taste.
All this is a bit of a simplification, but it makes the point that campaigns these days use predictive analytics to try to achieve a high degree of individual precision in their messaging. And they use repeated experiments to optimize and hone those messages — anything from the issues mentioned to the type of contact to the aesthetics of a piece of mail — for particular audiences. Issenberg shows how this kind of activity has pervaded campaigns, and how the lines have repeatedly blurred between academic political science experiments and the actual business of campaigns. While political campaigns and the likes of Target and Bed Bath & Beyond are not commercial in exactly the same way, audience “microtargeting” tactics are similar, and The Victory Lab would have benefited from including more information on what retailers — who must surely be at the forefront of such efforts — do, especially given some of the privacy concerns these tactics may raise (such as, for example, identifying a teen girl as pregnant before her father found out).
Nate Silver, who runs the FiveThirtyEight.com blog, and who became a household name in the waning days of the 2012 presidential campaign, makes a brief cameo in The Victory Lab. It may surprise some readers that the aspect of Issenberg’s book that would likely be of most interest to Silver is the wonky details of improving the predictive ability of the campaigns’ algorithms. Silver’s book, The Signal and the Noise, is all about prediction and only a little bit about political campaigns. It’s a more imposing book than The Victory Lab — significantly longer, printed in a smaller font, and chock full of charts and graphs — and, perhaps ironically given Silver’s own Moneyball-esque background, bears much less resemblance to that book than Issenberg’s does.
For one thing, rather than a personality-driven narrative, The Signal and the Noise is an investigation. Silver wants to know what makes for good and bad predictions. To get to an answer, he surveys fields as diverse as meteorology, economics and geophysics. Why, for instance, have weather predictions improved over the last several decades, while earthquake predictions have not? Why are the National Weather Service’s weather predictions consistently (and, yes, predictably) better than those on your local newscast? Everything from the nature of different kinds of data sets to the characteristics and motivations of the predictors themselves comes under consideration.
Though The Signal and the Noise isn’t personality-driven, Silver’s background and experience determine the flow, which is mostly a good thing. This is not a book by a journalist looking at the “wacky world of numbers” or some such “check out the geeks” outsider formulation. Silver is a statistician with an extremely analytical mind. He is not afraid to talk with anyone, be they chess masters, academic experts, or even Donald Rumsfeld (about “unknown unknowns” — a quote for which I’ve always thought the former secretary of defense was unfairly maligned). And he is not afraid to get down and dirty with the data.
One theme of the book is Silver’s espousal of a Bayesian view of probabilistic reasoning. There’s nothing revolutionary about this — Bayes’ theorem is centuries old — even though traditional statistics rely on a different formulation. Nonetheless, Bayesian reasoning is powerful and intuitive, and — with Silver as a spokesman — will be introduced to many for the first time. When Silver recently got in trouble with the New York Times for making a wager with Joe Scarborough on some of his predictions, he was simply being a good Bayesian. But much of his fame, and his notoriety to some, comes from comparing what he does with what political prognosticators do. It’s not a fair comparison. The book unpacks all this, anticipating much late-election commentary.
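For readers meeting the idea for the first time, the Bayesian view can be shown in a few lines. The numbers below are invented for illustration (they are not an example from Silver’s book): start with a prior probability for a hypothesis, observe some evidence, and use Bayes’ theorem to compute the updated (posterior) probability.

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior P(hypothesis | evidence) via Bayes' theorem."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1.0 - prior) * p_evidence_if_false
    return numerator / denominator

# Hypothesis: the candidate will win. Start at 50/50.
prior = 0.5

# Evidence: a new poll shows the candidate ahead. Suppose (invented
# figures) such a poll appears 75% of the time when the candidate is
# headed for a win, and 30% of the time when headed for a loss.
posterior = bayes_update(prior, 0.75, 0.30)

# The defining Bayesian habit: each new piece of evidence updates the
# previous posterior, which becomes the prior for the next update.
posterior_after_second_poll = bayes_update(posterior, 0.75, 0.30)
```

A single favorable poll moves the probability from 0.50 to about 0.71; a second independent one moves it to about 0.86. This incremental, probability-revising habit is also why a wager is such a natural Bayesian act: a bet is just a stated posterior with stakes attached.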
I was especially interested in the sections of the book where Silver probes probabilistic reasoning and prediction in the context of games. He investigates both chess (with an emphasis on the matches between world chess champion Garry Kasparov and the IBM supercomputer Deep Blue) and poker (Silver himself made hundreds of thousands of dollars as a semi-professional poker player). The best part here is the slow and systematic nature of the exposition. Silver takes the time and space, with ample illustrations, to walk step-by-step through illustrative situations.
In his review of The Signal and the Noise for the New York Times, Noam Scheiber perceptively points to a potential downside of Silver’s approach. It’s easy for a mathematically gifted person like Silver to wade into data and draw potentially insightful conclusions. But everyone who has developed expertise on a particular issue with rich data sets knows that there are often idiosyncrasies and nuances (say, in data quality over time) that must be considered — often in a qualitative manner — or the results may be clean, beautiful and totally wrong. (In my day job, I frequently analyze information from some of the federal statistical agencies to support advocacy efforts, so to this I say: welcome to my world.) For instance, Silver tests some of the predictions about climate change contained in the 1990 First Assessment Report of the U.N. Intergovernmental Panel on Climate Change (IPCC). Sure, he’s got the quantitative skills, but is he really qualified to do that?
I think that even if such an analysis wouldn’t make it into a top peer-reviewed scientific journal, it’s worth doing. One of Silver’s arguments is that testing predictions is a vital form of accountability. A two-fold caution is warranted, however. The first is simply: garbage in, garbage out. The second is that we ought to submit tests of predictions to the same scrutiny received by the predictions themselves.
Silver shows himself a careful analyst and I don’t think he would disagree with any of this. I agree with Scheiber that more Nate Silver (and more Nate Silver-style analysis done by equally responsible people) in public discourse can only be a good thing. We just have to remember not to be dazzled by numbers and charts. The Bayesian machines that are our brains need to perform rigorous analyses of their own before we accept their conclusions.
Josh Trapani is the Independent’s managing editor.