What’s the Point of Polls?

The following piece by Brooke Hempell, Barna’s senior vice president of research, is an excerpt from Barna Trends 2018 (now available), an annual collection summarizing Barna’s recent major research studies through analysis, interviews and infographics.

In July 2016, my family and I traveled to the United Kingdom to visit friends and colleagues for a few weeks. We arrived in Central London, tired from an overnight flight, to be greeted by an enormous protest blocking all routes to our hotel. We would soon find out this was one picture of the response to Brexit. London was largely in shock, as almost no one thought the U.K. would actually vote to exit its collective European neighbors, and few in the most populous city had taken the vote seriously. After all, pollsters had largely failed to predict this turn of events.

Fast-forward six months, and a similar scenario played out in the United States. Americans watched, many incredulously, as Donald Trump was elected president in a vote that seemed to defy nearly all polling predictions. How did so many polls get it so wrong? The answer lies in a combination of social science principles and modern media phenomena.

First, let’s review the constraints of polling. A poll’s validity relies on a few things, each of which is fraught with potential missteps that can significantly impact the credibility of a poll.

1. The Representativeness of the Sample

First, in our current environment, getting a sample of Americans who actually represent widespread popular opinion can be harder than you would imagine. It used to be that a good poll relied on randomly dialing home phone numbers across a geographically diverse listing. Today, with 90 percent of Americans owning a cell phone, and half of households no longer having a landline, sampling from the “phone book” is no longer representative. Even though polling companies have drastically increased the proportion of cell phones in their polls, this method is still challenging because people’s area codes often no longer represent where they currently live, many numbers are unlisted, and many people will not answer a call from an unknown caller.

Also, people in rural areas or over age 65 are significantly more likely to have a landline; Millennials, Hispanics, people in cities and those in a lower socioeconomic bracket are less likely to have one. This is a very important balance to get right for, say, an election poll, as the former group (with higher landline penetration) is more likely to vote Republican and the latter group (with lower landline penetration) is more likely to vote Democratic.

Further challenging polls, the typical response rate (the percentage of people called who will agree to complete a phone survey) has dropped, from one-third of Americans in 1997 to just 9 percent today, according to Pew Research data. More people unwilling to answer a phone poll means more difficulty in ensuring a representative sample via phone.

In many cases, it can be easier to get a representative sample online than via phone, as more households have internet access than a landline. However, people who are older, Hispanic, African American or live in rural areas are less represented online.

Considering all of the above, getting to an equal starting place—a representative sample of the population—is not that easy!
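The correction pollsters apply when a sample skews toward one group is often a form of post-stratification weighting: each respondent group is weighted by its share of the population divided by its share of the sample. The sketch below illustrates the idea with made-up numbers (the group shares and support figures are purely hypothetical, not from any actual poll).

```python
# Post-stratification weighting sketch with hypothetical numbers:
# rebalance a sample that over-represents landline households.

population_share = {"landline": 0.40, "cell_only": 0.60}  # assumed true mix
sample_share     = {"landline": 0.70, "cell_only": 0.30}  # skewed sample

# Weight for each group = population share / sample share
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Hypothetical support for a candidate within each group
support = {"landline": 0.55, "cell_only": 0.45}

# Unweighted estimate reflects the skewed sample...
naive = sum(sample_share[g] * support[g] for g in sample_share)

# ...while the weighted estimate reflects the population mix.
weighted = sum(sample_share[g] * weights[g] * support[g] for g in sample_share)

print(f"naive: {naive:.2f}, weighted: {weighted:.2f}")
```

With these numbers the unweighted poll overstates the candidate's support (0.52 versus a population-weighted 0.49), solely because landline households were over-sampled.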

2. Understandable and Unbiased Questions

Next, the validity of questions asked can drastically skew findings. For example, a 2006 New York Times poll asked, “Do you favor a gasoline tax?” Just 12 percent of adults answered yes. But when the poll asked, “Do you favor a gasoline tax to reduce U.S. dependence on foreign oil?,” 55 percent answered yes. “Do you favor a gasoline tax to reduce global warming?” yielded 63 percent agreement. Political pollsters are notorious for asking biased questions to get responses that will produce evidence of support for lobbying or political agendas. When examining polling results, it’s essential to be on the lookout for bias in question wording.

Other items that can bias responses include the order of questions (other questions in a survey may bias later question responses) and simply conducting an interview via phone (a respondent is sometimes compelled to give the more “socially acceptable” answer instead of their real opinion when a live person is on the other end of a call).

3. The Reliability of Respondents’ Predictions of Their Future Behavior

Finally, and very importantly, predicting future behavior is extremely difficult! In market research, when we aim to predict future uptake of a product or endorsement for a nonprofit, for example, we ask about propensity to do various things and then construct probability models with multitudinous caveats and assumptions built into them. This is because people are very bad at predicting their future behavior, especially around something like choosing a political candidate.

Many factors can influence one’s decision up until the last minute, and inevitably the decision will be much more emotional than logical (an idea which gave birth to an entire field of study: behavioral economics).

Furthermore, the act of going to vote is itself unpredictable. Again, this is true of any model or research that tries to predict behavior: despite our best intentions, “life gets in the way.” Only 61 percent of eligible voters actually cast a ballot in November 2016, yet presumably a higher share would have listed an intended vote if polled. In studies on this topic, people give many reasons why they do not make it to the polls: they couldn’t get off work, got stuck in traffic, had a sick kid at home or simply didn’t feel like making the effort on the day. In the 2016 presidential election, for instance, the last of these is commonly cited as a reason for lower-than-anticipated Democratic turnout.

Additionally, factors such as media coverage, peer influence, a compelling Sunday sermon or one’s own conscience can sway people in the opposite direction as well—to go out and vote when they might not have been so inclined in the past. In sum, predicting whether someone will vote at all is probably more difficult than predicting how they will vote in an election.
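The gap between stated preference and expected votes described above can be sketched in a few lines: weight each respondent's choice by a probability of actually turning out, as turnout-adjusted poll models do in principle. All respondents and probabilities below are illustrative inventions, not data from any real survey.

```python
# Sketch: raw stated preference vs. turnout-weighted expected votes.
# Every number here is hypothetical, chosen only to show the mechanism.

respondents = [
    {"choice": "A", "turnout_prob": 0.9},
    {"choice": "A", "turnout_prob": 0.4},  # enthusiastic answer, unlikely voter
    {"choice": "A", "turnout_prob": 0.3},
    {"choice": "B", "turnout_prob": 0.8},
    {"choice": "B", "turnout_prob": 0.9},
]

# Raw stated preference: candidate A leads 3 to 2.
raw_a = sum(r["choice"] == "A" for r in respondents)

# Expected votes once each respondent's turnout probability is applied.
expected = {"A": 0.0, "B": 0.0}
for r in respondents:
    expected[r["choice"]] += r["turnout_prob"]

print(raw_a, expected)
```

With these numbers, A leads in stated preference (3 of 5 respondents) but B leads in expected votes (1.7 to 1.6), because A's supporters are less likely to show up. This is the "likely voter" problem in miniature, and it depends entirely on turnout probabilities that are themselves hard to estimate.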

So, with all these risk factors, is there any point in polls?

At Barna, well, perhaps we’re showing our own bias, but we believe polls have an important role to play in understanding our current “post-truth” context—at least to the degree that they illuminate and interpret popular public opinion rather than attempt to predict it.
