Analysis: Even in a close election, random chance means polls should be showing a broader range of results than they are. That raises the question of whether we're in for another polling surprise.
I'm Gen Z. I have never been polled. I've asked probably close to 20 of my other Gen Z friends if they've ever been polled. Literally none of them had been, even the ones in swing states.
From that, I gather Gen Z is massively underrepresented in polls. Good news is, it seems like at least 3/4 of us, if not more, are very left.
BUT good poll results aren't just "we polled 1,000 people and here's who they're voting for."
Good pollsters collect demographic data when they poll. They model how different demographics tend to respond, and they correct for those biases when weighting their results.
Yes, reducing underrepresentation at poll time would be ideal. But pollsters are smart and are doing their best to put out good models. Pollsters know Gen Z is underrepresented and are accounting for that already.
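The correction described above is usually done by post-stratification weighting. Here's a minimal sketch of the idea with entirely made-up numbers (the shares and support figures below are illustrative, not real poll data):

```python
# Post-stratification weighting sketch. All numbers are hypothetical.
# If Gen Z is 17% of the expected electorate but only 8% of respondents,
# each Gen Z response is up-weighted by 0.17 / 0.08 = 2.125.

population_share = {"gen_z": 0.17, "other": 0.83}  # assumed electorate makeup
sample_share = {"gen_z": 0.08, "other": 0.92}      # observed in the poll

weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Hypothetical per-group candidate support measured in the raw sample:
raw_support = {"gen_z": 0.70, "other": 0.48}

# Weighted topline: each group's support, scaled by its share of the sample
# and its correction weight.
weighted = sum(raw_support[g] * sample_share[g] * weights[g] for g in raw_support)
print(round(weighted, 3))
```

With these numbers the weighted topline lands around 51.7% even though the raw, Gen-Z-light sample would read lower. That's the whole point: underrepresentation at poll time doesn't automatically bias the published number, because the model compensates for it.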
In other words, don't let Gen Z underrepresentation in the polls lull you into a false sense of security. The polls are accurate. The race is neck and neck.
Another problem with polls is that pollsters were off the mark in both of the last two elections, generally toward the Democratic side. So some of them compensate not by modifying their methodology, but by goosing the numbers by the same amount in the other direction this time around. They might say, "Hey, we underestimated the guy by 2% in this state last time, so let's give him a 2% mulligan."
If you know polling is an inexact science, and you were wrong consistently in one direction twice in a row, it is better for your reputation if you are off in the other direction this time.
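The "mulligan" adjustment described above is just adding last cycle's miss as a flat offset. A sketch, with made-up numbers:

```python
# Hypothetical "mulligan" adjustment: shift this cycle's raw number by the
# amount the pollster missed in the same direction last time. Not a real
# pollster's methodology, just the arithmetic the comment describes.
past_miss = 0.02   # underestimated the candidate by 2 points last cycle
raw_poll = 0.47    # this cycle's raw weighted number for that candidate

adjusted = raw_poll + past_miss
print(round(adjusted, 2))
```

Note this bakes in the assumption that the old error will recur in the same direction, which is exactly the reputational hedge being criticized.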
I personally lie to pollsters because ALL polls are push polls. I can’t be the only one who feels this way. So tired of being a useful idiot, so I try to be less useful.
The #1 pain point for pollsters is the prediction of the election demographics.
Polls and statistics are such that a small simple random sample has too little power / weak error bounds (I'm talking like +/-10%, nearly useless).
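That ±10% figure lines up with the textbook 95% margin of error for a simple random sample of about 100 people, using the worst case p = 0.5:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(100), 3))   # about 0.098, i.e. roughly +/-10%
print(round(margin_of_error(1000), 3))  # about 0.031, i.e. roughly +/-3%
```

So a 100-person simple random sample really is close to useless for a neck-and-neck race, and even 1,000 people only gets you to about ±3 points, which is why pollsters lean on the stratification tricks described next.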
The easiest way to improve your error bounds is to make assumptions about the electorate's makeup. I.e., if you know the election will be 50% male and 50% female, you can poll 50 men and 50 women (rather than 100 random people, who might end up as 60 men and 40 women due to randomness).
Lather, rinse, repeat for other groupings (Latino, Asian, Black, 18-year-olds, 55-year-olds, rural, urban, etc.) and you get the gist of how this all works.
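The quota idea above can be sketched in a few lines. The electorate shares here are assumptions (that's the whole point of the criticism that follows: the quotas are only as good as the turnout model behind them):

```python
# Proportional quota allocation sketch. Shares are a hypothetical turnout
# model, not real data; a real poll would cross these with race, age,
# rural/urban, and so on.
electorate = {"male": 0.48, "female": 0.52}

def quotas(shares, n):
    """How many respondents to recruit from each group for a poll of size n."""
    return {group: round(share * n) for group, share in shares.items()}

print(quotas(electorate, 1000))  # {'male': 480, 'female': 520}
```

Filling fixed quotas like this removes the sampling randomness in group composition, which is where the tighter error bounds come from.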
Alas: the male/female turnout model this year is completely broken because abortion is on the ballot. All pollsters know this. Their numbers are crap because the methodology is crap this year. It's impossible to predict women's turnout.
Not entirely. A few months ago, AOC discussed how her own internal polling of her own district ended up underestimating her support by around 10 percentage points. It was in the hour-long talk she gave explaining why she was still supporting Biden as the candidate, before he dropped out.
Polling has always been tricky, but I think in the past decade it's gotten nigh-impossible. These institutions now seem more focused on not losing their jobs than on actually trying to gauge support for a politician.
Makes me wonder if issue polling instead of politician polling is better. I imagine it probably is a little bit, but I'm not sure.