The Victory Lab: The Secret Science of Winning Campaigns

“I wish God gave green noses to undecided voters, because between now and election eve, I’d work only the green noses. I wish God gave purple ears to nonvoters for my candidate on election eve, because on election day I’d work only the purple voters.”

Matt Reese in The Victory Lab

For the last several weeks, I’ve been combing through Sasha Issenberg’s The Victory Lab: The Secret Science of Winning Campaigns. Issenberg traces the recent history of integrating data into political campaigns. The book features both the conflict that data sometimes creates amongst the political establishment and the value it brings to understanding voters and potential voters alike.

The sheer size and analytical depth of the 2012 presidential campaigns are often described in the mass media only in broad terms. But Issenberg and Jonathan Alter, author of The Center Holds: Obama and His Enemies, provide a more detailed view of the analytical rigor of the Obama campaign. Alter details that to collect information on voters in contested states, the campaign would place “4,000 to 9,000 phone calls a night. The calls, which eventually numbered nearly one million, sampled ten times as many voters a night as a standard pollster surveyed in a week.”[1] The Obama team had to have the technical capacity to store the results of these calls and the staff capacity to translate the results into a strategy for allocating resources daily.

Beyond polling potential voters, the campaign also developed detailed profiles of its supporters and would-be supporters. Data collected by the Obama campaign could make promotional materials like television ad buys exceptionally targeted. Alter describes that the campaign “could calculate that less likely young voters in Madison, Wisconsin, watched college football, or undecided older voters in Toledo watched Judge Judy, or that persuadable veterans in Tallahassee watched the History Channel.” With this knowledge in hand and message testing to different audiences, the campaign could focus its messages on very specific, desired populations.

But advanced data and targeting are relatively recent phenomena in presidential politics, and many state and local campaigns still struggle to generate the resources or the capacity to hone messages to desired audiences as described above. Issenberg takes readers through some of the key evolutions that have led to the data integrated into political races today, not all of which are political in nature.


While these are not the only dates captured in The Victory Lab, I chose them not because they mark the most significant people or moments, but because I think they give a sense of how far data and targeting have come in the world of politics since the 1960s.

1962: The U.S. Postal Service rolls out ZIP codes that “split the country into 36,000 zones and assigns each a 5-digit code.” For-profit businesses now have a new geographic boundary by which to compile customer data and begin to sort individual preferences at this smaller geographic level.
1974: The National Committee for an Effective Congress (NCEC) changes the norm of how voters are categorized in efforts to move more voters to the polls on election day. The NCEC maps political geography through three categories:
  1. “A democratic performance index,” which predicts how well a Democrat would perform on the ballot in a particular precinct;
  2. “A persuasion percent,” which shows how much an area fluctuated between parties; and
  3. “A GOTV percent,” which measures how much voter turnout fluctuated from election to election.
These data points were all crafted down to the precinct level and soon adopted as a key tool of the Democratic National Committee.
1980s and 1990s: Hal Malchow, a Mississippian and political consultant, starts what we now call A/B testing through snail mail. Malchow “would send out slightly different letters to multiple groups of recipients, and then identify which brought in the most [campaign] money.” As Malchow aims to integrate his work into large-scale campaigns, he bumps up against the political consulting establishment, which he believes is threatened by this new technique. Malchow voices his frustration with politics, saying it is the “only industry in the world where there is no market research.”
1990: For the first time, the U.S. Census Bureau releases data by “block groups” for the entire nation, not just urban areas. This gives researchers and campaigns the ability to see characteristics like poverty, income, race, household size, or nationality for much smaller areas, and campaigns begin thinking about how to use this data to inform strategies for voter turnout and voter registration.
1999: Yale University researchers Alan Gerber and Donald Green run randomized trials to determine the most effective methods of increasing voter turnout. They find that visiting the home of a registered voter is much more influential in getting that person to vote than phone calls or postcards. Issenberg mentions that this shakes up the consulting world – particularly for consultants making their money in phone or mail outreach.
2002: Alex Gage further pushes the use of data in campaign targeting. As a political pollster, he is pulled in for Mitt Romney’s race for Massachusetts governor. For several years before 2002, Gage had worked to link consumer data with more traditional data in voter files. Put together, these sources made for a more robust picture of voters. In 2002, Gage creates a “rank-order of the [Massachusetts’] nearly 2 million independents based on their openness to Romney.” He discovers that people ranking high on this independent conversion scale were also HBO subscribers, and the campaign then “sends brochures to everyone shown in [his] files to be a premium-cable subscriber.”
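The Gerber–Green trials above come down to comparing turnout rates between randomly assigned contact groups. A minimal sketch of that kind of analysis, using made-up numbers rather than the study’s actual data:

```python
from math import sqrt

def turnout_effect(voted_treat, n_treat, voted_ctrl, n_ctrl):
    """Difference in turnout rates between a treated group (e.g. canvassed
    households) and a control group, plus a two-proportion z-statistic."""
    p_t = voted_treat / n_treat
    p_c = voted_ctrl / n_ctrl
    pooled = (voted_treat + voted_ctrl) / (n_treat + n_ctrl)
    se = sqrt(pooled * (1 - pooled) * (1 / n_treat + 1 / n_ctrl))
    return p_t - p_c, (p_t - p_c) / se

# Hypothetical counts: 520 of 1,000 canvassed voters turned out,
# versus 440 of 1,000 in the untreated control group.
effect, z = turnout_effect(520, 1000, 440, 1000)
print(f"turnout lift: {effect:.1%}, z = {z:.2f}")
```

With random assignment, the difference in rates is an unbiased estimate of the canvassing effect; the z-statistic indicates whether a lift this size could plausibly be chance.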

These evolutions in political data highlight several themes: the tension of bringing new technologies into an established campaign environment, the role of data in generating increasingly refined profiles of voters, and the value of randomized trials in proving the effectiveness of different campaign strategies.


Of these self-selected highlights, I find those that detail new sources of public data – the formation of ZIP codes or more refined Census groups – the most interesting, because I regularly used public data for advocacy and education on state policy issues in Mississippi. The emergence of block data from the U.S. Census has been especially helpful for bringing life to issues because advocates are now able to visually depict a trend across areas of interest like cities or counties.

One of the first ways I saw Census data mapped in a compelling way for issue advocacy was at the Mississippi Economic Policy Center. In the mid-2000s, the Center wanted to decrease the prevalence of payday lenders in high-poverty areas because of the high cost of borrowing through these businesses. To illustrate the overlap between poverty concentrations and payday lender locations, the Center pulled local-level poverty data from the Census and color-coded the neighborhoods by their poverty concentration (see red, yellow and green below). The map provides a more persuasive way of communicating the idea that high-poverty communities have closer access to a high-cost loan product than to the lower-cost, more traditional loan products of other financial institutions. The Census Bureau also provides the map templates for the data for free on its site, making the construction of maps even more accessible to individuals and non-profits.

[Map: payday lender locations overlaid on color-coded Census poverty concentrations in Mississippi]

Block data is also helpful to non-profits or local agencies that provide services to communities in need. If an afterschool program targets one particular neighborhood, that program can start to understand the financial situation of households in its area by looking at Census data. By accessing block data online, non-profits can answer questions like: approximately what portion of families in our service area rent their homes? What is the average household income? How many families are two-parent households? What portion of homes in this neighborhood have a car? By reviewing this data, non-profit leaders can start to narrow down potential services that would be important to their community and then follow up with residents directly through surveys of their own.
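Answering those questions is mostly a matter of summing counts across the block groups that cover a service area. A small sketch with invented figures (real numbers would come from the Census Bureau’s published tables):

```python
# Hypothetical block-group records for one service area; the field names
# and counts here are illustrative, not actual Census variables.
block_groups = [
    {"households": 410, "renters": 260, "no_vehicle": 95},
    {"households": 380, "renters": 190, "no_vehicle": 40},
    {"households": 295, "renters": 230, "no_vehicle": 110},
]

# Aggregate counts across block groups, then compute shares.
total = sum(bg["households"] for bg in block_groups)
renter_share = sum(bg["renters"] for bg in block_groups) / total
no_car_share = sum(bg["no_vehicle"] for bg in block_groups) / total
print(f"renters: {renter_share:.0%}, households without a car: {no_car_share:.0%}")
```

The same aggregate-then-divide pattern works for any count the Census publishes at the block-group level, which is why the 1990 nationwide release mattered so much.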

So while Census data has surely provided a new layer of information on voters and non-voters, it has also provided a way for leaders of cities, non-profits or state agencies to assess the needs of communities as well.


The revolutionaries are taking a politics distended by television’s long reach and restoring it to human scale – delivering, at times, a perfectly disarming touch of intimacy. – The Victory Lab

As the era of data in politics continues into the early 2000s, Issenberg follows Alex Gage’s work targeting independents and persuadable Democrats on the 2002 Romney campaign for governor, and then traces Gage’s transition to the 2004 Bush presidential campaign, where he incorporates similar micro-targeting strategies. By merging both consumer and voter data, Gage and others could determine the characteristics of voters likely to vote for Bush and then seek out non-registered or non-voting residents with similar characteristics. Campaigns could also increasingly customize messages on TV, through mail, or through canvassing that they believed matched the issue preferences of particular individuals.

Issenberg details that by the late 2000s, political consultants on the other side of the aisle were also crafting tools of their own. In 2007, a team of political consultants, labor leaders, and researchers created the Analyst Institute, an institution with the aim of merging behavioral science research on voting with Democratic campaign strategy. Around the same time, Catalist also joined forces with Democratic races. The formation of both institutions signaled that data in politics was both profitable and increasingly in demand.

President Obama’s 2012 election campaign used the work of consulting partners like these, but also had a large, in-house team of staff members focused on digital strategy. Alter notes that “The [Obama] digital team assembled in Chicago was in fact three teams – Digital, Tech, and Analytics – with interrelated and often competitive functions.” There was a large difference between the Romney and Obama campaigns in their budgets for digital, and that showed in outcome measures like the number of donors and the size of the campaigns’ email lists. During the 2012 race, President Obama’s campaign signed up 16 million email list members to Mitt Romney’s estimated 3 million.[2] The Obama campaign also reported four times more donors than the Romney campaign in 2012 – 4.4 million to 1.1 million respectively (see graphic).

The Obama campaign’s decision to invest heavily in analytics also created a culture of message testing that ultimately strengthened its online fundraising efforts. An online report by Engage Research, “Inside the Cave: the definitive report on the keys to Obama’s success in 2012,” features an example of an Obama campaign email with the subject line “I will be outspent” that raised over $2.6 million. A remarkable figure on its own, but the email made an even more compelling case for A/B testing of subject lines: had the campaign sent out its lowest-performing subject line that day, it would have raised $2.2 million less. Examples like these underscore the potential consequences of messaging for any national or state campaign.
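The mechanics behind a test like that are straightforward: send each candidate subject line to a small random slice of the list, measure revenue per recipient, and project the winner onto the full list. A sketch with invented test-cell numbers (the campaign’s real cell sizes and figures were not published at this level of detail):

```python
def project_revenue(test_results, full_list_size):
    """Pick the best-performing subject line from small test sends and
    project its revenue per recipient onto the full email list."""
    best = max(test_results, key=lambda r: r["revenue"] / r["recipients"])
    per_recipient = best["revenue"] / best["recipients"]
    return best["subject"], per_recipient * full_list_size

# Hypothetical test cells, each sent to a random 20,000-person slice.
cells = [
    {"subject": "I will be outspent", "recipients": 20_000, "revenue": 5_200},
    {"subject": "Some scary numbers", "recipients": 20_000, "revenue": 3_100},
    {"subject": "Deadline",           "recipients": 20_000, "revenue": 800},
]
subject, projected = project_revenue(cells, 10_000_000)
print(subject, f"${projected:,.0f}")
```

The gap between the best and worst cells, multiplied across millions of recipients, is exactly why a few words in a subject line could be worth millions of dollars.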

Both Issenberg and Alter point to the Obama campaign’s efforts to hire staff from outside of the traditional political campaign environment as critical to developing a more competitive digital strategy. Innovators from Google, Threadless, Accenture and others brought a new lens to the Obama campaign that many deem instrumental to the campaign’s success.


As you read through Issenberg’s account of how campaigns are becoming more sophisticated in their use of individual information, it’s hard not to think about how campaign data fits into a U.S. political system with a widening gap between public leaders on the left and the right.

Tom Patterson, a professor at the Harvard Kennedy School of Government, often emphasizes in his American politics course that the current partisan divide in our nation’s politics has not always been the case (important for someone like me, with a political memory of less than 20 years). Below, a slide from his course uses data from the Pew Research Center to highlight that the way Democrats and Republicans view particular issues – the size of government, social safety nets, and equal opportunity – is now (red bar) more divided than it was 25 years ago (blue bar).

[Slide: Pew Research Center data on partisan divides over particular issues, today vs. 25 years ago]

Patterson has also shown how our media outlets increasingly attract politically divided audiences, from MSNBC and the Daily Show on the left to Fox and Rush Limbaugh on the right.[3] Patterson’s slide below reveals that 80% of Fox News viewers are conservatives and close to 80% of MSNBC viewers are liberals.

[Slide: political leanings of Fox News and MSNBC audiences]

While these divisions in perspectives and TV viewership may not be particularly surprising to some, they do raise the question of how an increasingly divided national political dialogue affects our political campaign environment. More particularly, it is worth reflecting on how an increasing division between the two parties potentially influences how effective data can be in helping target and win over uncommitted or independent voters. If both parties take positions at further ends of the spectrum from one another, and the political dialogue becomes even more contentious, does recruiting independents or non-voters get harder? If it gets harder, does that increase the need for a political strategy beyond what consumer and voter data can supply?

Some of Issenberg’s thoughts in the MIT Technology Review reveal that the level of data we have on individuals may be good enough to transcend the national rift between parties by persuading individuals through local connections. Issenberg quotes David Simas, the Obama campaign’s director of opinion research, as saying: “What [data] gave us was the ability to run a national presidential campaign the way you’d do a local ward campaign. You know the people on your block. People have relationships with one another, and you leverage them so you know the way they talk about issues, what they’re discussing at the coffee shop.”

However valuable a local touch is in campaigns, I would be willing to bet that an increasingly divided political environment and the gridlock in Washington, D.C. are shaping the number of independent or non-voting residents in this country. I would also predict that voter apathy may be a barrier to victory that is harder for data and analytics to resolve in 2014 and 2016.

Until Next Time,


Note: This post is an assignment for Nicco Mele’s Harvard Kennedy School course, “From to Obama 2012: Digital Strategy in Political Campaigns.” It reviews Sasha Issenberg’s The Victory Lab and reflects on topics covered in class.


[1] Alter, Jonathan. 2013. The Center Holds: Obama and His Enemies. New York, NY: Simon and Schuster.

[2] Engage Research. Inside the Cave: The definitive report on the keys to Obama’s success in 2012.

[3] Slide taken from Dr. Thomas Patterson’s February 6th, 2014 Kennedy School class lecture on Congress and political parties. Data from the Pew Research Center.