President Donald Trump gestures while speaking during the second and final presidential debate Oct. 22 at Belmont University in Nashville, Tenn.

Election Day is almost here. Poll numbers are flying. Just when you think there is a trend, a new poll comes along showing something different. It’s hard to know what’s really happening.

Earlier this week, one national poll had Joe Biden ahead by 10 points; that same day, another had Trump ahead by a point. What to believe?

The best advice: Don’t take any one poll to heart. Don’t assume any one poll is always right or wrong. Look at polls in context, look for trends. That’s why averaging polls is useful.
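To see why averaging helps, here is a minimal sketch; the poll names and margins below are invented for illustration, not real survey results.

```python
# Hypothetical margins: positive = Biden lead, negative = Trump lead.
polls = {
    "Poll A": +10.0,
    "Poll B": -1.0,
    "Poll C": +5.0,
}

# A simple average smooths out the noise in any single survey.
average = sum(polls.values()) / len(polls)
print(f"Average margin: {average:+.1f} points")  # +4.7 points
```

Sophisticated averages also weight polls by recency and track record, but even this crude mean is steadier than any single number.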

You ask: Why do survey results differ so much? There are a variety of reasons.

First, there is always the possibility of sampling error, which is a normal part of scientific polling. That explains why multiple polls taken at the same time can differ by a few points without any of them being wrong.
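As a rough illustration of how large that error can be: the textbook 95% margin of error for a sample proportion depends mainly on sample size. The figures below are illustrative, not drawn from any particular poll.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a sample proportion.
    p=0.5 is the worst case; z=1.96 is the 95% z-score."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical national poll of 1,000 respondents:
print(f"{margin_of_error(1000):.1%}")  # about 3.1%
```

And because each candidate’s number carries its own error, the gap between two candidates can legitimately wobble by roughly twice that, which is why two honest polls taken the same week can land several points apart.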

Second, there is a quality factor. How survey questions are worded, how samples are selected and how interviews are conducted can all affect results. High-quality polls cost more: a live interview conducted by cellphone is about twice as costly as one conducted by landline, and far more expensive than an online survey.

Also look out for pollsters who cut corners throughout an entire election cycle and then conduct their final poll properly. If that final poll comes close to the actual results, they will be praised for “correctly calling” the election, even though their earlier work was shoddy and frequently off the mark.

Third, timing matters. Polls are snapshots, not crystal balls. They tell you where things stood at the time they were taken; they don’t predict the future. A good example was Wisconsin in the 2016 presidential race. The last two polls had Hillary Clinton ahead by an average of seven points. But those polls were completed about a week before the election, before late-deciders broke for Trump, who won the state by a tiny margin.

Fourth, some polls are just flat-out wrong. There’s no sugarcoating flawed survey research, especially when questionnaires are biased and samples are out of whack. Example: A statewide poll sample in Louisiana that is only 15% African American, when the statewide electorate is 31% African American, will produce defective results.
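To see how much a skew like that can matter, here is a sketch of the arithmetic using the Louisiana example above; the candidate-support percentages are hypothetical, and only the 15% and 31% shares come from the example itself.

```python
# Hypothetical: African American respondents back Candidate X at 85%,
# everyone else at 35%. Only the 15% vs. 31% shares are from the example.
support = {"African American": 0.85, "Other": 0.35}

def topline(black_share):
    """Overall support for X given the African American share of the sample."""
    return (black_share * support["African American"]
            + (1 - black_share) * support["Other"])

print(f"Unweighted (15% sample): {topline(0.15):.1%}")    # 42.5%
print(f"Weighted (31% electorate): {topline(0.31):.1%}")  # 50.5%
```

Under these made-up numbers, the skewed sample understates the candidate by eight points, far more than any margin of error.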

While legitimate pollsters always want their work to be accurate, sometimes there are rewards for substandard work. Bad polls are frequently outliers. And, an outlier will often get more media attention than a poll that’s consistent with other surveys. When outliers are released, they hit like bombshells. The media uses them to heighten the drama (“Has Reagan Lost His Lead?”) and partisans use them to prop up optimism when the results favor their side (“New Poll Shows McGovern Has a Chance”).

Of course, just because a poll is an outlier doesn’t mean it’s wrong; it could be the canary in the coal mine.

After the 2016 election, there was a common misperception that the polls were terribly wrong. In truth, they weren’t.

Of the 13 final polls that measured the national popular vote four years ago, 12 put Hillary Clinton ahead, by an average of 3.1 points. When the votes were counted, she won the national popular vote by 2.1 points. She lost the presidency because of Trump’s edge in key states that gave him a majority of the electoral votes.

Polls in those key states also came close. In Florida, the average of the final three polls had Trump ahead by three-tenths of a point. He won it by 1.2 points. Ohio’s last poll gave Trump a seven-point lead, and he carried it by 8.1 points. The final polls in Pennsylvania and Michigan showed them to be one- or two-point races, and they were.

Another reason polls got a bad rap in 2016 had nothing to do with actual polling, but with predictive modeling that was often confused with polling.

Well-known modelers — including Nate Silver’s FiveThirtyEight, The Upshot at The New York Times and the Princeton Election Consortium — incorrectly predicted Clinton would win. They were also far off in key states. Some observers erroneously equated these “black box” predictions with poll results.

Predictive modeling and election betting markets are based on educated guesswork, not representative samples. They may be fun to discuss, but they should not be taken too seriously.

Election-watchers are now transfixed by every new poll number. But for voters who just want their candidate to win, all they have to worry about is voting — no matter what the polls show.

Ron Faucheux publishes LunchtimePolitics.com, a daily newsletter on polls. He’s an author, pollster and nonpartisan political analyst based in Louisiana.
