Polling in Election Campaigns
From the viewpoint of those running for public office, election campaigns involve little more than an extensive effort to communicate. Candidates must communicate with other party officials, party members, potential contributors, supporters, volunteers, journalists, and, of course, voters. Ultimately, all campaign activities are secondary to the candidate's effort to communicate with voters. Accordingly, it is not surprising to learn that the largest share of campaign resources is poured into this communication: advertising to send persuasive messages to voters, and polling to learn the concerns that voters have and the opinions they hold.
Over the past three decades of American elections, polling has become the principal research tool for developing campaign strategy. The major elements of strategy consist of the answers to two simple questions: (1) what are the target audiences that a campaign must reach? and (2) what messages does it need to deliver to these audiences? Polling is essential for answering both of these questions.
By and large, the technique most frequently employed for these purposes is the "cross-sectional" survey, in which the campaign's polling firm telephones a random sample of citizens and asks them an inventory of standard questions. Sampling theory dictates that, if the citizens are selected at random and are sufficiently numerous, their answers to these questions will deviate only slightly from the answers that would have been given if every eligible voter had been asked. Completing the survey before new, major events change the attitudes of voters is also very important, so most polls are conducted over a three- or four-day period. That means that a large number of interviewers -- either paid or volunteer -- will have to be used to reach several hundred voters each evening, between the hours of 5:00 and 10:00 P.M.
Surprisingly, most campaign pollsters do not base their sample upon the population of all citizens of voting age. As is widely known, in the United States substantial numbers of eligible voters do not actually cast their ballots on election day. Campaigns have learned through much hard experience that it is more efficient to concentrate their efforts on "likely voters" rather than to try to convince all eligible citizens that it is worth their while to vote. Accordingly, the first few questions on most survey instruments try to ascertain how likely it is that the citizen being questioned will actually vote. The interviewer will thank the unlikely voters and move on to other calls. As a result, campaign communications strategy is built around the interests of likely voters, and campaigns rarely make major efforts to attract votes among hard-core nonvoters.
This step having been accomplished, the first task of the survey is to divide likely voters into three groups: confirmed supporters of the candidate in question, confirmed supporters of the opponent, and undecided. Then, the basic principle of American election campaigns can be reduced to three simple rules: (1) reinforce your base, (2) ignore the opponent's base, and (3) concentrate most attention upon the undecided. That is, in the United States, most of the energy of election campaigns is directed at the approximately 20-30 percent of the voters who may change their votes from Democratic to Republican or vice versa.
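The segmentation step above can be sketched as a simple tally. The response strings below are invented for illustration, standing in for answers to a vote-intent question on a real survey:

```python
from collections import Counter

# Hypothetical screened responses from likely voters only; each value is
# an answer to "If the election were held today, whom would you support?"
responses = ["ours", "opponent", "undecided", "ours", "undecided",
             "opponent", "ours", "undecided", "ours", "opponent"]

counts = Counter(responses)
total = len(responses)
for group in ("ours", "opponent", "undecided"):
    share = counts[group] / total * 100
    print(f"{group:9s}: {counts[group]:2d} ({share:.0f}%)")
```

In a real poll the undecided bucket, here 30 percent of the sample, is the group most campaign messaging would target.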
Though most candidates are desperately interested in who is ahead, the usefulness of the cross-sectional survey extends well beyond simply measuring the closeness of the election contest. Campaigns need an accurate measurement of voter opinions, but they also need to know how to change (or preserve) these opinions. The term "cross-sectional" refers to the differences among groups of citizens; the survey technique is designed to record opinion among the various subsections that differentiate the pool of voters. If there are gender differences in the way voters look at the election, for example, the survey will be able to measure these distinctive attitudes. A campaign that discovers it is doing better with male voters, among all those who have already decided how they will vote, will begin to concentrate its efforts upon men who are still undecided, because those voters are likely to be easier to win over.
By asking many questions about voters' preferences for different public policies, the political poll will also provide candidates with insights about the messages they need to deliver to critical groups of voters. Late in an election race, for example, undecided voters may be those who are more cynical about election politics. This result may tempt a candidate to attack his opponent for a poor attendance record or some action that can be pictured as favoring a particular interest group over the general public. In the case of the gender differences referred to above, a campaign that is doing poorly among women may discover through polling some special concerns held by women and attempt to devise a message specifically for them.
Normally, the process of deciphering the messages that will move critical groups relies on statistical methods; the answers of supporters, opponents and the undecided are analyzed to determine the strength of the association between candidate support and public-policy attitudes. A strong association is a pretty good indication that the policy area in question may be "driving" the choice of candidates.
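One common way to quantify such an association is a correlation between a binary support indicator and a policy-attitude score. The sketch below uses invented responses purely for illustration; real campaign analyses would run on the actual survey data and often use more elaborate statistical models:

```python
import math

# Hypothetical respondents: 1 = supports our candidate, 0 = does not,
# paired with agreement (1-5 scale) with a policy position. Illustrative only.
support = [1, 1, 1, 0, 0, 1, 0, 0, 1, 0]
attitude = [5, 4, 5, 2, 1, 4, 2, 3, 5, 1]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(support, attitude)
print(f"support-attitude correlation: {r:.2f}")
```

A coefficient this close to 1.0 would suggest the policy area is "driving" candidate choice; values near zero would suggest it is not worth building a message around.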
Polling is both science and art. Constructing a random sample, designing the questionnaire, fielding the survey instrument, and analyzing the results constitute the science of public-opinion research. All these aspects rely upon well established, validated techniques. The art comes in writing the questions. Question wording can markedly affect the results obtained. Consider, for example, two different questions: "Do you support sending U.S. troops to Bosnia to enforce the recent peace accord?" versus "Do you support President Clinton's plan to send U.S. troops to Bosnia to enforce the recent peace accord?" Voters are likely to react differently to these questions; some opinions will be altered either in favor or against the proposal simply by the association with the president. Which of these wordings is most appropriate depends upon the judgment of the pollster and the purposes of the survey.
In general, when polls are to be used to develop strategy, the consultants labor to write questions that are fair and impartial, so they can achieve an accurate measurement of public opinion. Lately, however, campaigns have been resorting to so-called "push questions" to test possible campaign themes. In these questions, voters are asked to react to questions that have been deliberately worded in very strong language. Consider the following example: "If you knew that one of these candidates had voted to cut welfare payments to the poor, would that increase or decrease the chances that you would vote for him?" When the poll data reveal that many undecided voters back away from a candidate when confronted with this information, then the candidate sponsoring the poll is likely to use this approach in attacking his or her opponent.
Increasingly, political pollsters combine focus-group research with random sample surveys in order to develop campaign messages. In the typical focus group, between eight and fifteen voters will be telephoned at random and asked to participate in a collective discussion on a given evening. In these group sessions, pollsters are able to gather a qualitative, in-depth view of citizen thinking. Often focus groups provide a more complete interpretation of the survey results. Knowing how voters reach their conclusions can be just as important as the quantitative distribution of opinion gathered by surveys. Focus groups can also provide pollsters with question wording that captures the thought processes of citizens, so that the influential messages they work into campaign advertising will have maximum impact.
Behind the scenes, most major political campaigns rely on polling from the beginning to the end of the election race. The typical candidacy will be formulated on the basis of a "benchmark" poll taken during the spring before the fall elections. This expensive survey may take as much as 30 minutes to complete over the phone and will include a large enough sample so that inferences can be drawn about important subgroups of voters. Once the campaign has begun and voters are being bombarded with competing campaign messages, the pollster will return to the field, often several times, using much shorter questionnaires in order to get an idea of how the opinions have changed from the original benchmark.
A number of well-funded campaigns -- usually those for president or for senator or governor in larger states -- have recently begun using "tracking surveys" to follow the impact of campaign events. The pollster will complete, say, 400 surveys on each of three nights. The resulting 1,200 voters constitute an adequate sample with an error rate of around 3 percent. On the fourth night, the pollster calls another 400 voters and adds that to the database, dropping off the answers of those voters reached on the first night. And, this process continues, sometimes for the whole two months of the fall campaign, so that the sample rolls along at a constant 1,200 drawn from the previous three nights. Over time, the resulting database will allow the pollster to observe the effect of campaign events -- such as televised debates or the start of a new advertising theme -- upon voter attitudes and preferences.
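The "around 3 percent" figure follows from the standard formula for the 95-percent margin of error of a sampled proportion, roughly 1.96 times the square root of p(1-p)/n, with p = 0.5 as the worst case. A minimal check for one night of 400 interviews versus the rolling three-night window of 1,200:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p from a simple random sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

# One night of 400 interviews vs. the rolling three-night window of 1,200.
for n in (400, 1200):
    print(f"n={n}: +/- {margin_of_error(n) * 100:.1f} points")
```

Pooling three nights cuts the error from roughly 5 points to under 3, which is why trackers report the rolling 1,200 rather than each night separately.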
If, for example, the lines indicating support for the two candidates run roughly parallel until the opponent starts attacking on the basis of character rather than policies, and at that point the two lines begin to diverge as the opponent's support increases, then the pollster had better figure out a way of countering the character message or the race will be lost.
Figuring how to counter the opponent's attack may involve examining particular subgroups in the electorate or it may call for a new message from the injured campaign, but in either case, the response will be based on survey research. Polling, American politicians would agree, has become an essential ingredient of campaign strategy.
F. Christopher Arterton is Dean of the Graduate School of Political Management, the George Washington University.
Polls Play a Very Large Role in U.S. Elections:
Voters, Policies, Tactics, Strategy Affected
Washington -- In the last month of a U.S. presidential election campaign, Americans are inundated with public opinion polls reporting on the candidates' standing in the race.
The use of polling in U.S. political campaigns has gained widespread popularity during the past 30 years.
Polling has become a national institution since psychologist George Gallup founded the American Institute of Public Opinion in 1935. It has grown into what Karlyn Keene of the American Enterprise Institute in Washington has conservatively estimated to be a $3 billion industry.
"Properly conducted, analyzed and presented," opinion polls "can increase the independence and quality of news reporting and give the public a chance to help set the agenda of campaigns and define the meaning of elections," says Brookings Institution president Bruce MacLaury in the book, "Media Polls in American Politics."
Candidates for public office at state and national levels have turned increasingly to public opinion surveys for guidance in their campaigns. Political analysts agree that well-conducted public opinion polls can yield valuable insight into the electorate, telling candidates how they are perceived by voters, what issues are important to the electorate, and where their strongest supporters reside.
"There are two very basic but different uses of polls," says Republican pollster Richard Wirthlin. "There's the descriptive function, which basically tells a candidate and his staff who's ahead and who's behind, what people feel the most important issues are....
"But more important to campaigns and candidates are the strategic polls -- not to assess where the campaign is at any moment in time, but rather what the candidate should do to change the vote successfully -- dealing with perceived weaknesses or strengths, determining key constituents by demographics, geography, lifestyle."
Some scholars trace polling in U.S. elections all the way back to the early 19th century. The magazine Literary Digest began trying to ascertain voter preference systematically in 1916 and, by sampling its own readers, correctly predicted the winner of five straight presidential elections.
But in the 1936 race between incumbent Democratic President Franklin Roosevelt and Republican Alf Landon, Literary Digest erred grievously, predicting a Landon landslide when, in the actual election, Roosevelt won by a large margin, carrying every state but two.
In 1948, the pollsters -- using sampling techniques designed to represent the total adult population -- also failed in their predictions for the outcome of the presidential contest between Democratic incumbent Harry Truman and his Republican opponent Thomas Dewey.
Two top polls, Gallup and Crossley, favored Dewey by about 50 percent to 45 percent, and a third, the Roper poll, said he would win by 52 percent to 37 percent. When the voting results showed the polls were wrong again, a study by the Social Science Research Council cited three major polling errors: A disproportionate number of college-educated voters had been polled; polls were not conducted through the final stages of the campaign; and the large number of undecided (15 percent) were erroneously eliminated or distributed among the candidates.
Since 1952, the major polls have correctly forecast the winner in each election except in 1960 and 1968, when the samplings suggested the races were too close to call.
By 1968, political polling had become a virtual monopoly controlled by the Gallup, Roper and Harris organizations. But then newspapers and television networks joined the endeavor, forming partnerships: the New York Times/CBS, Washington Post/ABC, and Wall Street Journal/NBC.
Al Richman, senior research specialist at the U.S. Information Agency, who has spent about 20 years analyzing American opinion, notes that "there has been a quadrupling of the polls available since the 1970s." He says at least six organizations are now polling the presidential race at least once a week. Since the beginning of October, Richman notes, ABC and USA Today have been conducting "tracking polls" daily, and will continue to do so until the election.
Most public opinion polling is now done by telephone, because that method is faster and cheaper than doing interviews in person. In developing a sample of individuals to be surveyed, the poll taker must reach not only an adequate number of people, but also enough of the "right kind of people -- those representative" of the voters in November.
Some polls report on registered voters, others on "likely" voters. However, neither approach is foolproof, because people don't always tell the truth about their voting history.
Published polls typically include a warning that the poll has a margin of error of plus or minus 3 or 4 percentage points. This "sampling error" is based on the fact that not everyone in the targeted population has been questioned.
Sampling errors are not the only things that can throw a survey off. Even seemingly trivial factors, such as the age, sex, or race of the interviewer conducting the telephone canvassing, can affect a survey's results.
The order in which policy questions are asked also can have a significant impact on the results. Pollsters have learned that responses to emotional questions will influence responses to later questions.
The phrasing of questions also can affect poll results. In 1985, the National Opinion Research Center at the University of Chicago conducted two national polls. In one, the question was whether too little money, too much or about the right amount was being spent on "welfare" in the United States. Only 19 percent said too little was being spent. In the other poll, the question was whether too little, too much or about the right amount was being spent on "assistance to the poor." More than 60 percent said too little was being spent.
And the time interviews are conducted can have an impact on the results. The 1984 Reagan campaign discovered that Republicans are slightly more likely than Democrats to go out on Friday evenings. Therefore Friday polling tends to over-represent Democrats.
In 1992, the polls repeatedly showed the issue of central concern to American voters was the national economy. Richman points out that six polls concurred in ranking the issues the public most wanted debated by the 1992 presidential candidates: first, economic issues, particularly unemployment and the federal budget deficit; second, domestic and social issues, particularly health care and education; and, a distant third, foreign affairs and defense issues.
A review of 1992 polls shows then Democratic presidential contender Bill Clinton had maintained a lead over Bush on economic issues. However some polls showed independent candidate Ross Perot moving ahead of Clinton on handling the economy and budget deficit. Bush clearly led in the polls on handling foreign affairs.
Richman notes, "While most Americans give highest priority to dealing with U.S. economic and social issues, various polls show that a majority continues to favor active U.S. involvement abroad. Examination of numerous trend-measuring questions does not support the notion that the American public is 'drifting toward isolationism.' [In 1992,] most Americans - about 60-70 percent - continue[d] to favor an active, cooperative role in world affairs, including 'full cooperation' with the United Nations and the selective use of military force. Americans are demanding more action on the domestic front but no disengagement from the rest of the world."
Based on more than 30 polls reported between September 1 and October 18, 1992, Richman says, Clinton averaged about 45 percent; Bush, 34 percent; and Perot, 12 percent. But eight polls between October 19 and 25, Richman notes, showed Perot gaining almost seven points, at the expense of both Bush and Clinton. These polls showed Clinton with an average of 42 percent; Bush, 32 percent; and Perot, 19 percent.
While published polling results sometimes lag behind changes in public opinion, the 1992 polls were fairly accurate in predicting the final outcome. On Election Day, Clinton received about 43 percent of the popular vote, Bush finished a little higher than his polling average, and Perot finished within a few percentage points of what the polls had predicted, inside the margin of error.
Environmental Media Services
1320 18th Street, NW, Suite 500
Washington, DC 20036
©1999 Environmental Media Services