
The Opiate Of The Electorate

Our addiction to opinion polls has done more than enhance the already unacceptable power of the media; it has also redirected our attention and efforts away from policy and toward trivial personality contests at a time when much is at stake.



If your anti-Bush sentiments have turned into electoral passion, then you probably restrained your exhilaration after last Thursday's debate until you got a sense of how it played to the American electorate -- which means, how it played in the polls that began to pour out only moments after the event ended. The first "instant" polls seemed to indicate a Kerry victory, and by Sunday the Newsweek poll (considered notoriously unreliable by the pros) had appeared with the news that Kerry had pulled even or might even be ahead in the presidential sweepstakes. If it was then that the real rush of excitement hit you, face it: like a host of other Americans, you're a poll addict.

Opinion polls are the narcotic of choice for the politically active part of the American electorate. Like all narcotics, polls have their uses: they sometimes allow us to function better as political practitioners or even as dreamers, and don't forget that fabulous rush of exhilaration when our candidate shows dramatic gains. But polls are also an addiction, one that distorts our political feelings and actions even as it trivializes political campaigns -- and allows our political and media suppliers to manipulate us ruthlessly. The negatives, as pollsters might say, outweigh the positives.

But let's start with the good things, the stuff that makes people monitor polls in the first place, relying on them to determine their moods, their attitudes, and their activities. The centerpiece of all that's good in the polls lies in the volatility of public opinion, a trait the polls themselves can fairly claim to have discovered. The scientific consensus before World War II had it that political attitudes were bedrock, unchanging values.

Take, for example, Bush's "job rating," as measured by that tried-and-true polling question: "How would you rate the overall job President George W. Bush is doing as president?" The Zogby Poll's results are typical: until September 11, 2001, the President had middling ratings -- about 50% of Americans rated him "excellent" or "good." Then his approval rating surged to a stratospheric 82%. This makes sense; people rally around a president in a time of crisis.

What happened next is harder to explain. Despite the fact that wartime presidents almost always have huge support for the duration of the conflict, Bush's approval rating began a sustained decline, losing 20 points in the next 12 months (leading up to the first anniversary of 9/11) and another 12 points the following year. By September 2003, his approval rating had hit the 50% level again.

Virtually every group of political activists quickly grasped the significance of this decline: Something surprising was happening to our "war president." In this case, the polls helped to inspire peace activists to rebuild a quiescent anti-war (or at least anti-Bush) movement, because they knew (from the polls) that the decline in his approval rating was largely due to the war. The same figures convinced a whole host of important Democratic politicians to declare for the presidency, bringing well-heeled financial backers with them. And they triggered a campaign by Karl Rove and his posse of Bush partisans to discredit Bush's attackers.

Poll results can be a boon to informed and effective politics; they alert activists and others to the receptiveness of the public on important issues. But the key fact that makes polls valuable -- that public opinion is a volatile thing -- also turns them into an addictive drug that distorts and misleads. Once the addiction forms, we all want to know (immediately, if not sooner) the "impact" of every event, large or small, on the public's attitudes, so that we can frame our further actions in light of this evidence. And this responsiveness means that instead of sustained organizing around important issues that can have a long-lasting impact on political discourse, we increasingly go for the "quick fix," especially attention-getting gimmicks that can create short-term shifts in the public-opinion polls, which then, of course, feed more of the same.

Blunt Instruments

The use of polls to determine the immediate impact of less-than-monumental events is a fruitless -- and often dangerous -- enterprise. There are two interconnected reasons why this is true. First, polls are at best blunt instruments. They can measure huge changes over time, like the enduring shifts of 30%, 20% and 12% in Bush's ratings, but they are no good at measuring more subtle changes of opinion in, say, the 3-5% range. As the famous (and much ignored) "margin of error" warning that accompanies all polls indicates, this incapacity is built into the technology of polling and cannot be eliminated by any means currently available. One sign of it is the often-used phrase in news reports that a 3% difference between candidates is a "statistical tie" (which everyone promptly ignores and which in any case might actually mask a 6% difference between the candidates). And that 3% "margin of error" is only one of five or six possible inaccuracies. The sad fact is that even a 15% difference between two candidates might not exist, unless it is replicated over time and/or across several different polls.
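To see where that warning label comes from, here is a minimal sketch in Python of the standard margin-of-error formula; the sample size and the reported lead are illustrative numbers of my own, not from any particular poll.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a single proportion p in a sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical national poll: about 1,000 respondents, race near 50/50.
moe = margin_of_error(0.50, 1000)
print(f"Per-candidate margin of error: +/- {moe:.1%}")  # roughly +/- 3%

# The gap between two candidates is roughly twice as uncertain, since a
# point lost by one is usually a point gained by the other. So a reported
# 3-point lead could plausibly be anything from a 3-point deficit to a
# 9-point lead -- the "statistical tie" the news reports then ignore.
lead = 0.03
print(f"Reported lead {lead:.0%}; plausible range "
      f"{lead - 2 * moe:+.1%} to {lead + 2 * moe:+.1%}")
```

And remember, this sampling error is only the one inaccuracy that comes with a formula; the other five or six don't.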

Let's take an example that, for most people, no longer carries the emotional weight it once did -- the 2000 election. If you had consulted the Gallup poll on most days late in that campaign, you would not have known that the vote would prove to be a virtual dead heat. On October 21, with a little more than two weeks to go, Gallup did show Gore ahead by 1%. Three days later, Bush had surged in the same poll and was ahead by a staggering-looking 13%. The election appeared to be over.

We now know that this surge was a blunder by Gallup. For one thing, other polls simply did not record it. But more important, we know that, as volatile as public opinion can indeed be, it is not nearly this volatile, except under the stimulus of events like 9/11. This "surge," like virtually all such surges, actually reflected the fundamental inability of polls to measure day-to-day changes in attitudes -- especially voting intention. This is so because of all sorts of arcane polling problems that would take a semester of graduate school to review fully. But let's look at just two examples.

Consider, for instance, the fact that many young adults party on Thursday, Friday, and Saturday. Since recent trends have young singles leaning Democratic, you can expect fewer Democrats and more Republicans to be home during polling hours on those days. And that's but a single example of changes in polling audiences. Daily polls, in other words, often record large fluctuations in attitudes because questions are being asked of very different audiences. Even the time of day can make a big difference. (Think of who is at home on Sunday afternoons during football season.) This, in turn, forces pollsters to make all sorts of adjustments (with fancy scientific names like "stratified sampling" and "weighted analysis"). And these adjustments are problematic; in the context of daily electoral polls they often add to that margin of error instead of reducing it.
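As a rough illustration of what such a weighting adjustment does -- this is a toy sketch, not any agency's actual procedure, and every number in it is invented -- consider reweighting a sample that under-reaches young adults:

```python
# Toy "weighted analysis": a weekend phone sample under-reaches young
# singles, so each group's responses are reweighted toward assumed
# population shares. All numbers are invented for illustration.

population_share = {"18-29": 0.20, "30-49": 0.35, "50+": 0.45}  # assumed
sample_share     = {"18-29": 0.10, "30-49": 0.35, "50+": 0.55}  # skewed

# Hypothetical Kerry support within each group of the raw sample.
kerry_support = {"18-29": 0.60, "30-49": 0.50, "50+": 0.45}

raw      = sum(sample_share[g] * kerry_support[g] for g in kerry_support)
weighted = sum(population_share[g] * kerry_support[g] for g in kerry_support)

print(f"Raw Kerry support:      {raw:.1%}")       # roughly 48%
print(f"Weighted Kerry support: {weighted:.1%}")  # roughly 50%
```

The catch, of course, is that the "population shares" for a given election are themselves guesses -- which is why these corrections can add error as easily as they remove it.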

No One Knows Who Is Going to Vote

There are lots of other problems, but the big kahuna, when it comes to an election, is that we only want to interview people who are actually going to vote (a little over 50% of all eligible voters in a typical presidential election -- and possibly closer to 60% in this atypical year). One way to eliminate the non-voters is to look only at registered voters, but that is just a partial solution, since in most elections fewer than 80% of registered voters actually vote. What pollsters need to find out is which of those registered voters will actually show up. This is particularly crucial because, while there are a great many more registered Democrats than Republicans, the Republicans usually narrow that gap by being more diligent about getting to the polls.

But there is no way to figure out accurately who is going to vote. Going to the polls on Election Day is a very complicated phenomenon, made even more so this year by the huge number of new registrations in swing states. It is almost impossible for pollsters to know who among these new voters will actually vote. While many potential voters have a consistent track record -- always voting or rarely voting -- many others are capricious. For these "episodic voters," factors like weather conditions and distance to the polls mix with levels of enthusiasm for a favorite candidate in an unstable brew that will determine whether or not they get to the polling station. In fact, who is "likely to vote" actually varies from day to day and week to week, and there's just about no way of measuring ahead of time what will happen on the only day of the only week that matters, November 2.

Pollsters, in fact, are really in a pickle. If they rely on previous voting behavior (as many polls do), they're likely to exclude virtually all first-time voters. Since the preponderance of newly registered voters are young singles (who, we remember, tend to be Democrats), such polls will underestimate the Democratic turnout. So many polls (including Gallup) ask episodic and first-time voters about their enthusiasm for their candidate and their commitment to voting, in order to weed out those who have little real interest and very little energy for dragging themselves to the polls.
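Here is a toy version of such a screen -- entirely hypothetical, since each agency's actual screen is proprietary and more elaborate -- just to show the mechanics:

```python
# Toy likely-voter screen, loosely in the spirit described above: score
# respondents on past voting and stated enthusiasm, then keep the top
# slice. The fields and thresholds are invented for illustration.

def likely_voter_score(voted_last_time: bool, enthusiasm: int,
                       says_certain_to_vote: bool) -> int:
    """Crude 0-4 score; enthusiasm is a self-report on a 0-2 scale."""
    score = 0
    if voted_last_time:
        score += 1            # past behavior
    score += enthusiasm       # 0, 1, or 2
    if says_certain_to_vote:
        score += 1            # stated commitment
    return score

respondents = [
    {"voted_last_time": True,  "enthusiasm": 2, "says_certain_to_vote": True},
    {"voted_last_time": False, "enthusiasm": 2, "says_certain_to_vote": True},  # first-timer
    {"voted_last_time": False, "enthusiasm": 0, "says_certain_to_vote": False},
]

likely = [r for r in respondents if likely_voter_score(**r) >= 3]
print(f"{len(likely)} of {len(respondents)} pass the screen")
```

Notice that a single point of self-reported enthusiasm can move a respondent across the cutoff without changing whom they support -- which is precisely how the events described next churn the "likely voter" pool.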

But this creates new distortions. For example, a big news story, including a polling-influenced one like the recent Bush "surge," can suddenly (but usually briefly) energize potential new Bush voters, turning them into "likely voters"; at the same time, it may demoralize Kerry backers, removing some of them from the ranks of "likely voters." Two days or two weeks later, another event (the first presidential debate, any sort of October surprise, or you name it) may create an entirely different mixture. And come election time, none of this may be relevant. On that day the weather may intervene, or any of a multitude of other factors may come into play. So "likely voter" polls are always extremely volatile, even though the underlying proportion of people who support each candidate may change very little.

What this means is that a large proportion of all dramatic polling fluctuations -- this year and every year -- are simply not real in any meaningful sense. But this does not stop election campaign managers and local activists from developing or altering their activities based on them, which only deepens the failure to mount sustained, issue-based campaigns and encourages a focus on superficial attention-getting devices.

You Can't Tell Which Poll Is Right

This leads us to the second huge problem with polls: Different polls taken at the same time often produce remarkably different results. Fifteen percent discrepancies between polls are not all that rare. If a group of polls use just slightly different samples (all of them reasonably accurate), slightly different questions (all reasonable in themselves), and slightly different analytic procedures (all also reasonable), the range of results can be substantial indeed. If, in addition, they call at different times of the day or on different days of the week, the differences can grow even larger. And if they use different definitions of "likely voters," as they almost surely will, the discrepancies can be enormous.

To see how such a cascade of decisions really screws up our ability to rely on polls, consider the now famous "bounce" that Bush got from the Republican Convention. The media, using selected opinion polls, conveyed the impression that Bush surged from a "statistical tie" to a double-digit lead. Many of my friends -- Kerry supporters all -- felt the election was lost. (Some of them would certainly have fallen from the ranks of Gallup's "likely voters"). Things got so bad that Michael Moore sent a letter to all the Kerry supporters he could reach, telling them to stop being crybabies and get back to work.

This is a prime example of the polls having a profoundly detrimental effect on public behavior, because the bounce for Bush was moderate at best. In fact, the most reasonable interpretation of the polls as a group suggests that there may have been a shift in public opinion from slightly pro-Kerry (he may have had as much as a 3% advantage) to slightly pro-Bush (perhaps as much as 4%). A plausible alternative view, supported by a minority of the reliable polls, would be that the race was a "statistical dead heat" before the convention and remained so afterward, interrupted only by an inconsequential temporary bounce.

To see why a moderate interpretation is a reasonable one, you need to consider all the polls, not just the ones that grabbed the headlines. I looked at the first 20 national polls (September 1 to September 22) after the end of the Republican convention, as recorded by PollingReport.com, the best source for up-to-date polling data. Only three gave Bush a double-digit lead. Two others gave him a lead above 5%, and the remaining 15 showed his lead to be 4% or less -- including two that scored the race a dead heat. In other words, taking all the polls together, Bush, who was probably slightly behind before the convention, was probably slightly ahead afterward. Certainly the media are to blame for our misimpression, but before we get to the media, let's consider how various polls could disagree so drastically.

Fortunately there are some energetic experts, especially Steve Soto and Ruy Teixeira, who have sorted this discrepancy out. The bottom line is simple: the double-digit polls far overestimated the relative number of Republican voters. Gallup, the poll that has been most closely analyzed, had 40% Republicans in its sample of likely voters, and only 33% Democrats along with 27% Independents. This might seem okay to the naked eye, but it turns out that in the last two elections, about 4% more Democrats than Republicans trooped into the voting booths -- and this, logically enough, was the proportion that the other polls used. Since 90% of Republicans right now claim they will choose Bush and 85% of Democrats say they will choose Kerry, this explains the gross difference between Gallup and most other polls; Gallup, that is, would have given Bush about a 4% lead if it had used the same party proportions as the other polls.
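The arithmetic is worth seeing. In the sketch below, the party-loyalty figures (90% and 85%) are the ones quoted above; the Independent split and the exact shape of the D+4 mix are my own illustrative assumptions, chosen only to show how the party mix drives the headline number.

```python
# Reweighting a Gallup-style result to a historical party mix. The 90%
# and 85% loyalty figures are from the text; the Independent split and
# the precise mixes are illustrative assumptions.

bush  = {"R": 0.90, "D": 0.15, "I": 0.55}  # share of each group for Bush
kerry = {"R": 0.10, "D": 0.85, "I": 0.45}  # and for Kerry

gallup_mix     = {"R": 0.40, "D": 0.33, "I": 0.27}  # Gallup's likely voters
historical_mix = {"R": 0.33, "D": 0.37, "I": 0.30}  # roughly D+4

def bush_lead(mix: dict) -> float:
    return sum(mix[p] * (bush[p] - kerry[p]) for p in mix)

print(f"With Gallup's party mix:   Bush {bush_lead(gallup_mix):+.1%}")      # roughly +12 points
print(f"With a historical D+4 mix: Bush {bush_lead(historical_mix):+.1%}")  # roughly +4 points
```

In other words, the entire double-digit headline rests on a single editorial guess about which party's voters will show up.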

How, then, could Gallup do such a thing? Though Gallup's explanation is complicated, it relies on the fact that, until Election Day, nobody can actually know how many Republicans and Democrats are going to show up at the polls. All polling agencies are actually predicting (or less politely, guessing) how many Democrats and Republicans will vote. Scientific and journalistic ethics might seem to dictate basing your present guesses closely on past elections, but Gallup can always simply claim that their information suggests a shift toward Republican affiliation and/or a much higher Republican turnout. In this case, the lack of any substantiating evidence for such a claim has led to accusations that Gallup's decision was politically motivated.

But in some ways, those exaggerated Gallup results are only a side issue when it comes to polls and this election. Don't lose track of the fact that even the "good" polls show a startling range of results that renders them almost useless in accurately determining the relative position of the candidates. Remember: the post-convention non-corrupt polls still ranged from zero to 8% in favor of Bush. That spread may sound modest, but in real-world terms its extremes represent the difference between a dead heat and a landslide. And there is really no way to tell who is right. In addition, because the media are under no obligation to report all of them, they can select the poll or polls that come closest to their predilection (or that simply offer the most shock or drama) and present them as the definitive results, ignoring or suppressing those that offer a contrasting portrait of the situation.

To see how pervasive this problem is, consider this sobering fact: The media have been reporting that the first debate pulled Kerry back into a "statistical dead heat." This is a source of exhilaration in the Kerry camp and (if we can believe media reports) significant re-evaluation in the Bush camp. It has certainly affected the moods of their supporters. But there is a good chance that this Kerry bounce was inconsequential. According to Zogby and Rasmussen -- two of the most reliable and respected polling agencies -- the Bush lead had already devolved into a "statistical dead heat" and the debate had no significant impact on the overall race.

Granted, these two polls are a minority, but in polling, unfortunately, the minority is often right. For a vivid example, consider the polls taken the last weekend before the 2000 presidential election. Since the election itself was a virtual dead heat, well-conducted polls should have called it within that 3% margin of error -- with some going for Gore and some going for Bush. But that is not what happened. PollingReport.com records 20 scientifically valid polls taken in the last weekend before the 2000 presidential election: fully 17 gave Bush a lead, ranging from 1% to 9%, while only two predicted that Gore would win (by 2% and 1%); one called it a tie. Even if you remove the absurd 9% Bush advantage, the average of the polls would have had Bush winning by about 3% -- which in our Electoral College system would have translated into something like a 100-vote electoral majority. In other words, even in a collection of the best polls doing their very best to predict an election, the majority was wrong and only a small minority was right.

Consider, then, that there are three extant interpretations of what has happened since just before the Republican Convention. In one rendering, promulgated almost unanimously by the media, Bush experienced a double-digit convention surge and held onto most of this lead until Kerry brought the race back to even with his sterling debate performance. This widely held interpretation is almost certainly wrong, but two plausible interpretations remain. The first, supported by the preponderance of polls, tracks a modest post-convention bounce for Bush and an offsetting modest bounce for Kerry after the initial debate. The second, supported by a minority of the reliable polls, holds that the race was a statistical dead heat before the convention and has remained one ever since, with both bounces inconsequential.
