The two questions were tacked on at the last minute to a Survey Research Center (SRC) survey on foreign policy:
“In the presidential elections next month, are you almost certain to vote, uncertain, or won’t you vote?”
(If certain or uncertain) “Do you plan to vote Republican, Democratic, or something else?”
The questions weren’t even the point of the survey. But the answers and their impact helped launch a far-ranging new field of study at the fledgling Institute for Social Research (ISR), establish electoral behavior as a discipline in political science, and shine a light on polling and sampling methodology nationwide.
It was the fall of 1948, and incumbent President Harry S. Truman was embroiled in a grueling campaign against Republican challenger Thomas E. Dewey. Truman’s popularity was low, even among Democrats, and as the election neared, the national press corps—based in large part on the reports of the three major pollsters, Gallup, Roper, and Crossley—was bluntly predicting an easy Dewey victory.
That’s when SRC unwittingly stepped in. Newly arrived researcher Robert Kahn was working with founder and future ISR director Angus Campbell on a study of public attitudes toward foreign policy for the U.S. State Department. As an afterthought, Campbell and Kahn threw in two questions to gauge the political interests and orientations of the respondents. “There was a great deal of interest in the coming election, so we added those questions,” Kahn says.
Kahn and Campbell ran their survey in October, finishing shortly before the Nov. 2 election. The sample of 610 prospective voters was too small to make any predictions about the forthcoming election, and that wasn’t what their research was about anyway, Kahn says. Still, as the responses to the survey came in and Kahn posted them on a blackboard, a surprising trend began to emerge: The two candidates were running neck and neck, with Truman slightly ahead, and more than 20 percent of voters still undecided. “The thing that was exciting, in spite of our very small sample, was that we were showing almost equal votes for Dewey and for Truman,” Kahn says, “whereas the widely publicized polls—and certainly all the newsprint—were agreed that it was going to be a landslide for the Republicans—for Dewey.”
Kahn and his wife Bea hunkered down by their home radio on election night to listen to the results come in. By late in the evening, broadcasters began questioning the predicted Dewey triumph; by morning, they were announcing one of the greatest upsets in American election history. As Truman, the newly re-elected president, headed from his home in Missouri back to Washington, D.C., the train paused in St. Louis and a photographer snapped the now famous photo of Truman grinning and thrusting out a copy of the Chicago Tribune declaring Dewey the winner.
The very public failure of the predictions shook commercial polling operations to their core. In fact, the negative fallout was so widespread that SRC (soon to be ISR) Director Rensis Likert felt compelled to declare in a Scientific American article that “it would be as foolish to abandon this field as it would be to give up any scientific inquiry which, because of faulty methods and analysis, produced inaccurate results.”
SRC certainly had no intention of abandoning the field. Immediately after the election, Kahn and Campbell decided to go back to the respondents who had participated in the first survey. This time, their questions would be firmly focused on how voters had behaved in the just-completed election.
With the new data in hand, Kahn and Campbell began to draw conclusions, including the following:
- Pollsters had drastically underrated the importance of undecided voters, apparently assuming they would either not vote or would split in the same proportions as committed voters. In fact, late deciders went 2 to 1 for Truman.
- Pollsters misunderstood how much could change in the final weeks or even days of the campaign: Roper stopped polling in September, and Gallup and Crossley in early October. But one-eighth of those who claimed to have voted said they didn’t choose a candidate until two weeks or less before Election Day.
- Pollsters appeared to accept that respondents would do what they said they planned to do, but that often wasn’t the case. Some who said they would vote didn’t; some who said they wouldn’t did. Moreover, a significant number of “committed” voters changed their minds, with more changing from Dewey to Truman.
SRC wasn’t the only organization evaluating what had happened. In the post-election turmoil, the Social Science Research Council convened a group to evaluate what had gone wrong. A few months later, the Committee on Analysis of Pre-Election Polls and Forecasts delivered a verdict that largely agreed with SRC’s conclusions.
In addition, the committee suggested that pollsters had relied on unscientific methods. At the time, commercial polling firms such as Gallup and Roper all used quota sampling: interviewers sought out set quotas of respondents (male or female, young or old, and so on) within assigned geographical areas. But because the choice of individual respondents was largely left to the interviewers, they might, for example, canvass mainly affluent neighborhoods, excluding poor and middle-class residents and biasing the results.
By contrast, SRC used a more time-consuming and costly approach known as probability or random geographic sampling. For the foreign policy survey, they chose clusters of counties across the country, and then randomly sampled the populations in those areas, giving every resident of voting age an equal chance of being chosen. SRC didn’t invent these techniques—they were developed in the 1930s, and one group of researchers had even studied political behavior using community samples in an Ohio county during the 1940 presidential campaign. But SRC was refining them and putting them to new uses.
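The difference between the two approaches can be sketched in a short simulation. This is a hypothetical illustration, not SRC’s actual procedure: the toy population, the use of “income” as a stand-in for neighborhood affluence, and the greedy affluent-first canvassing rule are all assumptions made for the example.

```python
import random

# Hypothetical population of 1,000 voters; "income" stands in for
# neighborhood affluence (not actual 1948 data).
population = [{"sex": "M" if i % 2 else "F", "income": i} for i in range(1000)]

def probability_sample(pop, n, seed=0):
    """Probability (random) sampling: every member of the population
    has an equal chance of being chosen."""
    return random.Random(seed).sample(pop, n)

def quota_sample(pop, quotas):
    """Quota sampling: fixed counts per group, but WHICH individuals fill
    each quota is the interviewer's choice -- modeled here as canvassing
    the most affluent neighborhoods first."""
    remaining = dict(quotas)
    chosen = []
    for person in sorted(pop, key=lambda p: p["income"], reverse=True):
        if remaining.get(person["sex"], 0) > 0:
            chosen.append(person)
            remaining[person["sex"]] -= 1
        if not any(remaining.values()):
            break
    return chosen

def mean_income(sample):
    return sum(p["income"] for p in sample) / len(sample)

quota = quota_sample(population, {"M": 50, "F": 50})   # quotas match population
prob = probability_sample(population, 100)

# The quota sample hits its sex quotas exactly, yet skews sharply affluent;
# the random sample tracks the true population mean income of 499.5.
print(mean_income(quota), mean_income(prob))
```

The quota sample is demographically “balanced” on the quota variable while still badly unrepresentative on everything the quotas did not control, which is essentially the flaw the Committee on Analysis of Pre-Election Polls and Forecasts identified.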
The hard look at sampling that resulted from Truman’s unexpected win was a turning point in survey methodology. Writing in 1998, Humphrey Taylor, head of polling firm Louis Harris and Assoc., declared, “Virtually all public opinion surveys conducted in the United States since then—whether conducted face-to-face or by telephone—have used some modified version of probability (or random) sampling. Indeed, for American researchers quota sampling is almost a dirty phrase.”
Meanwhile, SRC was reaping the benefits of its small but accurate survey. “It gave a kind of visibility to the organization and its sampling methods that had not been there before,” Kahn says. Campbell launched a new research program to study election behavior with collaborators who eventually came to include Gerald Gurin, Warren Miller, Philip Converse, and Donald Stokes. Their early election behavior studies and other related research were central to the 1970 founding of ISR’s Center for Political Studies, in the process creating the foundation for this branch of political science. And the Michigan Election Studies, which Campbell and Miller started in 1952, would go on to become the National Election Studies in 1977 and the American National Election Studies in 2005, along the way covering every presidential and midterm election since 1956.
As for Kahn, he doesn’t believe the survey particularly boosted his research reputation. He soon moved on to organizational issues. But there was a side benefit: “Angus and I became friends and colleagues in the process of working on it.”