Should you trust what pollsters are telling you?

You may have noticed this: a new poll comes out, often one commissioned and published by a media company – and it gets heavily publicized by that same media.

This is known as a feedback loop: the coverage shapes public opinion, and that shaped opinion feeds into the next poll.

The methodology of polling is itself flawed. So is the analysis by the humans conducting it.

Personal experience and personal beliefs get in the way. Many of the journalists who dismissed Trump with projections that he had a mere 2% chance of winning simply didn't know any potential Trump supporters in their personal lives; they didn't get it. Their personal beliefs also encouraged a bit of wishful thinking. They didn't want to get it.

Until someone invents a perfectly objective way to conduct polls, asking neutral questions and communicating them with neutrality, well, you’ve got a polling problem. Humans, flawed as they are, produce polls that are imperfectly designed, imperfectly conducted and imperfectly analyzed.

Polls are not a crystal ball but (at best) a snapshot of right now. Even the wording of polls reflects this focus on the present rather than the future: “If the presidential election were today, for whom would you vote?”

Consider how different your present and future answers would be to questions like: “If you had to eat your lunch now, what would you eat?” or “If you had to choose a hot babe to fuck right now, who would you pick?” You might say that food, romance, and politics aren’t all that similar, but the answers all point to a basic and consistent truth about human preferences: things change.

Exit polls are also flawed. There is no opinion poll that tells you what people actually did in the privacy of a voting booth on election day, and nobody but the voter themselves knows for sure how they voted.

In 2013, 41% of US households had a cellphone but no landline, and that number is on the rise. This poses a problem for polling companies because the 1991 Telephone Consumer Protection Act means they can't just autodial those cellphones. Without autodialing, polling those households is incredibly expensive and time-consuming. Even more problematic, younger and poorer households are much less likely to have a landline. Pollsters are bullshitting when they say they can take the pulse of the nation with a diminished and highly selective sample.

Response rates (the percentage of people who answer a survey when asked) have plummeted. In the 1930s the rate was over 90%; in 2012 it was 9%, and it has continued to decline since then. What's more, the US population has grown 2.5 times larger since the 1930s, so the overall participation rate (the percentage of the total population that actually ends up completing a survey) has fallen even faster.
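How much faster? Here's a back-of-the-envelope sketch in Python, using the figures above; the number of call attempts per poll is a made-up constant, since only the ratios matter:

```python
# Rough arithmetic behind the claim above. ATTEMPTS_PER_POLL is a
# hypothetical constant chosen only to make the comparison concrete.
ATTEMPTS_PER_POLL = 10_000

completed_1930s = ATTEMPTS_PER_POLL * 0.90  # 90% response rate
completed_2012 = ATTEMPTS_PER_POLL * 0.09   # 9% response rate

POP_1930s = 123_000_000       # US population, roughly the 1930 census
POP_2012 = POP_1930s * 2.5    # "grown 2.5 times larger"

per_million_1930s = completed_1930s / POP_1930s * 1_000_000
per_million_2012 = completed_2012 / POP_2012 * 1_000_000

print(f"Completions per million residents, 1930s: {per_million_1930s:.1f}")  # 73.2
print(f"Completions per million residents, 2012:  {per_million_2012:.1f}")   # 2.9
print(f"Decline: {per_million_1930s / per_million_2012:.0f}x")               # 25x
```

A tenfold drop in response rate compounded with a 2.5-times-larger population works out to roughly a 25-fold drop in per-capita participation.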

In the end, you're left with about a thousand adults (if you're lucky) who are taken to be representative of the approximately 225 million eligible voters in the United States. All of this is well known to pollsters, who claim that their complex mathematical methods can correct for these shortcomings. But rarely have I seen the question asked: "What if only a certain type of person answers polls?"
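To be fair to the math, a thousand respondents is defensible in pure sampling theory: the margin of error depends on the sample size, not the population size. A minimal sketch of the standard formula, assuming a genuinely random sample:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p estimated from a
    simple random sample of n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# About ±3.1 points for 1,000 respondents, whether the population
# is 225 million eligible voters or 225 thousand.
print(f"{margin_of_error(1_000):.1%}")   # 3.1%
print(f"{margin_of_error(10_000):.1%}")  # 1.0% - diminishing returns
```

The catch is the assumption: that formula holds only if the sample is truly random, and that is exactly what's in doubt here.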

Whether pollsters are cold calling or running a panel of individuals they can repeatedly survey, who are very likely getting paid (but not a lot), the results are unreliable. Why? What if the sort of people who are willing to spend time on their cellphone answering questions for little or no reward have something in common?

Of course, the pollster can always either call a cherry-picked list of people in order to get a predictable answer, or simply discard any responses that they don't want. How would you find out?

All of these things together mean that getting a random survey sample that is nationally representative is incredibly difficult.
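And those "complex mathematical methods" mentioned earlier? They usually mean weighting: adjusting over- and under-represented groups until they match known population shares. Here's a toy post-stratification sketch, with invented numbers, showing what that can and can't fix:

```python
# Toy post-stratification: a hypothetical sample that over-represents
# older respondents, reweighted to match known population shares.
# All numbers are invented for illustration.

# Known population shares by age group (e.g. from census data).
population_shares = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}

# Hypothetical poll results: older people were far easier to reach.
sample_counts = {"18-34": 100, "35-64": 400, "65+": 500}
yes_counts = {"18-34": 40, "35-64": 200, "65+": 300}

n = sum(sample_counts.values())

# Unweighted estimate: dominated by the 65+ group.
unweighted = sum(yes_counts.values()) / n

# Post-stratified estimate: each group's "yes" rate, weighted by its
# true population share rather than its share of the sample.
weighted = sum(
    population_shares[g] * yes_counts[g] / sample_counts[g]
    for g in sample_counts
)

print(f"Unweighted: {unweighted:.1%}")  # 54.0%
print(f"Weighted:   {weighted:.1%}")    # 49.0%

# The catch: this only repairs imbalance on variables you can observe
# and weight on. If poll-answerers differ from non-answerers *within*
# every group, no reweighting can recover the truth.
```

Weighting repairs imbalance on traits you measure, like age; it cannot repair the unmeasured trait of being the kind of person who answers polls.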
 
I take polls for what they are: some are of no value at all, others offer some insight, and the odd few are enlightening.
 