Gallup’s mea culpa this week and yet another release of 2016 trial heats remind us that the biggest threat to the health of public opinion polling may not be shrinking response rates or the rising cost of dialing cellphones, but our growing addiction to its results.
Some news organizations are considering using Survey Monkey. These are organizations that in the past would have scoffed at the idea of doing online or unscientific opinion research. Now, driven by a shark-like need to poll constantly or die, they’re talking with a company whose core business is online employee satisfaction surveys.
Velocity and critical mass have thrust us through a sort of survey sound barrier. Thanks to the 4G speeds at which news is disseminated, new survey brands pop up, and results are cranked out, polls no longer just document public opinion. They help drive it.
Of course, what keeps polling from becoming a completely self-fulfilling prophecy is reality. Reality can be a 2012 electorate that defies even the best-known pollster’s tracking. It also can be fluky polling based on a skunked sample, or recidivist bad polling by outfits that cut corners to work cheaply or don’t know enough to conduct sound surveys.
The poll-as-prophecy dynamic is easiest to spot in isolation. Gallup tracked the 2012 presidential horse race in a flooded zone, but its reputation and daily results meant its numbers were among the most followed. After a series of tracks that were “consistently the most favorable to [Gov. Mitt] Romney among the national polls,” as Pollster.com’s Mark Blumenthal concluded, Gallup’s final one-point edge for Romney crashed against Obama’s 3.7-point winning margin.
But Gallup won’t be remembered for that nearly five-point gap—it will be remembered for projecting Romney as the winner. Had it erred on the side of a win for President Barack Obama, folks there would be just as bent on implementing fixes but probably wouldn’t be calling a news conference to announce them. No one remembers the margin by which Dewey “beat” Truman.
In the recent special election in South Carolina’s 1st district, a late April Public Policy Polling (D) survey showing the Democrat leading by nine points single-handedly changed expectations for a race that had been seen as favoring former Gov. Mark Sanford (R), as flawed a candidate as he was. It helped that the news media were ready to believe the purported surge by Elizabeth Colbert Busch because of Sanford’s serial bad judgment.
“The PPP poll showing Colbert Busch ahead by nine definitely set expectations for the final week, and arguably gave Sanford some of the ‘underdog’ mojo he used effectively in the final days,” says my colleague David Wasserman. “Democrats didn’t want to raise expectations too high, and Republicans wanted nothing to do with Sanford, so it was a rare situation where PPP was the only thing most media had to go by.”
Unlike the South Carolina special, however, most big races are heavily, not lightly, polled, and most pollsters aren’t Gallup. Just the opposite: more and more survey brands are little known. And despite varying quality, frequency and formats—long-form or tracking, live-interview or robo-call, partisan or nonpartisan—polls are increasingly given equal attention. Based on frequency alone, Gallup’s tracking, along with Rasmussen’s, can skew perceptions. (My poll-junkie husband likens them to the permanent members of the UN Security Council.)
This is where the poll aggregators come in. They argue that averaging together individual trial-heat results allows sound surveys to compensate for sketchier ones. Some even take pains to address the issue of tracking-poll volume. Their case is backed up by the relative accuracy of their averages. But their work diminishes the value of the individual sound surveys.
As more cheaply conducted polls are given the same attention as expensive ones, two risks loom larger on the horizon. First, we may see declining incentive to invest in the types of public opinion surveys that are more expensive, more reliable and more insightful. Second, polling over time may become less informative and more of an echo chamber.
During a recent panel discussion on polling and aggregators hosted by the American Association for Public Opinion Research, ABC News director of elections Dan Merkle raised a concern: what if some pollsters start weighting their results so they are more in line with the aggregators’ averages?
As a rule, pollsters don’t want to be outliers—look at what Gallup has gone through. Over the years, we’ve seen certain pollsters produce trial heats that are outliers at first, then magically fall in line with the majority of other polls shortly before a vote.
In a May 2012 paper for the Center for the Study of Democratic Institutions, authors Joshua D. Clinton and Steven Rogers found “suggestive evidence” that robo-pollsters “may take cues” from live-interview polling “given the stakes involved.” Their sample was small—surveys taken during last year’s GOP presidential primary from after New Hampshire until Romney became the effective nominee—but their results point in that direction.
If Merkle’s proposed scenario becomes reality, there’s no way to guard against it. The news media would give the polls their 15 minutes, aggregators would pour them into their blenders and eventually, polls of the public would gradually be supplanted by tail-chasing polls of polls. The more polls that are conducted, the more nearsighted we may become.
Halfway around the world, the potential for polls to influence public opinion is being invoked to justify efforts to quash objective survey research altogether. The New York Times reported on May 21 that Russian prosecutors “are threatening to shut down the country’s only independent polling agency because it allegedly ‘influences public opinion and therefore does not constitute research but political activity.’”
And in China, per a recent Straits Times column forwarded by a security analyst friend, propaganda chiefs issued a May report discouraging independent inquiries, especially at universities and by the media, into areas they have deemed off-limits. Among them: “universal values, press freedom, civil society, civil rights, errors of the Chinese Communist Party (CCP), [and] crony capitalism”—all the typical stuff of public opinion polling.
Given what’s going on over there, an overabundance of independent polling here is a burden we may be blessed to bear. But our insatiable jones for polling may lead us to mortgage the long-term future of public opinion research for the sake of trying to master the near future in politics. That may not be a worthwhile trade.