Not all polls are created equal. How can journalists spot a bad or biased poll and avoid amplifying it?
- Treat pollsters and polling companies the same as any other source. Like any source, pollsters can get facts wrong, and some are bad actors who do not take a rigorous, scientific approach to their data. Journalists should therefore apply a strict vetting process, starting with the pollster’s track record.
- Know what type of sampling was used. Dr. Kyler J. Sherman-Wilkins, assistant professor in the Sociology and Anthropology department at Missouri State University, said it is imperative to know what type of sample was used in order to contextualize the data.
- Look at the volume of questions in a poll. Fernand Amandi, managing partner of the public opinion research and strategic communications consulting firm Bendixen & Amandi, suggests favoring shorter polls, because few people are willing or able to sit through a long one. “What we’re finding is the average person who will sit through an interview that takes 20 to 30 minutes to do a 100-question poll, or an 80-question poll, is not representative of your average person,” he said.
- The margin of error is like the quality assurance of polls. Journalists should look for polls with larger sample sizes. Why? There is an “inverse relationship” between sample size and margin of error: The smaller the sample, the higher the margin of error.
- Context is everything. Population size matters. A poll of 300 interviews means something different in a small town of 10,000 residents than in a national study of 300 million Americans. Amandi recommends a minimum of 800 interviews for national samples.
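The sample-size arithmetic behind the last two points can be sketched with the standard margin-of-error formula for a simple random sample, z·√(p(1−p)/n). This formula and the numbers below are a general statistical illustration, not figures from the article or its sources:

```python
import math

def margin_of_error(n, p=0.5, z=1.96, population=None):
    """Approximate 95% margin of error for a simple random sample.

    n          -- number of completed interviews
    p          -- assumed proportion (0.5 is the worst case, widest margin)
    z          -- z-score for the confidence level (1.96 ~ 95%)
    population -- if given, apply the finite population correction
    """
    moe = z * math.sqrt(p * (1 - p) / n)
    if population is not None:
        # Finite population correction: tightens the margin when the
        # sample is a noticeable share of the whole population.
        moe *= math.sqrt((population - n) / (population - 1))
    return moe

# The inverse relationship: more interviews, smaller margin of error.
for n in (300, 800, 1500):
    print(f"n={n}: ±{margin_of_error(n) * 100:.1f} points")
# n=300 → ±5.7 points, n=800 → ±3.5 points, n=1500 → ±2.5 points

# Context: the same 300 interviews in a town of 10,000
# versus a national population of roughly 300 million.
print(f"town:     ±{margin_of_error(300, population=10_000) * 100:.1f} points")
print(f"national: ±{margin_of_error(300, population=300_000_000) * 100:.1f} points")
```

The loop shows why Amandi's 800-interview floor matters: moving from 300 to 800 interviews cuts the margin of error from about ±5.7 points to about ±3.5 points.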
To hear more strategies for uncovering bias in polls, watch the “Numbers Are Not Neutral” program presented by the Journalism Institute and the National Association of Science Writers.