Sunday, June 04, 2006

 

Why the Robert F. Kennedy, Jr. article alleging the election was stolen is substantially right and the critics substantially wrong. Part 4

6. To back up just a bit, Manjoo: It's worth noting, too, that a team of political scientists hired by the Democratic Party to investigate what happened in Ohio also used statistical analysis to search for any pattern of obvious shifts from Bush to Gore in the vote count.

True, though the report reminds one a bit of George Bush's comment, "Of course it's a budget! There are a lot of numbers!" The problem with figures like Graph 35 is that they tell one what one already knows: Democratic precincts tend to vote Democratic. But one should examine them more closely. There are anomalies.

Here's just one, as an example: punchcards in Cuyahoga (Graph 35) have a strange y-intercept, one very different from every other figure in that graph. In words, what the intercept says is that in a precinct where not one voter would vote for Eric Fingerhut, 20% of voters would still vote for John Kerry. But that's likely not what it means. That intercept is also strongly influenced by what happens at the other end of the line. Another interpretation is that in heavily Democratic precincts, John Kerry's vote was unusually low. The slope of the line seems consistent with that interpretation. But since the authors don't give us the figures or get into the weeds of the analysis, we'll never know.

Where this failure to get into the weeds matters is in the Connally-Moyer vote. Connally did remarkably well in certain rural counties of Ohio, given that she was not from the area, was African American, and that her opponent was well known and part of the machine. There are two narratives. In the Kennedy narrative, the high vote for Connally represents votes being stolen from Kerry and Fingerhut. In the DNC narrative, the Fingerhut and Kerry votes track one another, and since it's unlikely that Republicans would bother to steal votes from guaranteed-loser Fingerhut, it's also unlikely that votes were stolen from Kerry.

But for the DNC group to reach that conclusion, they should have tried to run the Kennedy scenario through their statistics. In other words, take the data, adjust it for a shift of several percentage points away from Kerry in the counties where Kennedy thought there was fraud, and show that the shift could be detected by their methods (a sketch of what such a check might look like follows below). My thumb-in-the-wind estimate says that the Kennedy narrative could be directly disproven, but the DNC authors haven't done so. One gets the sense from the report that it was written by people who know what the answer is, so they don't have to look too closely at the data. It would have been nice if the DNC had brought onto the panel some people to serve as junkyard dogs, in the phrase of the Reagan era: people who would challenge the conclusions vigorously.
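To make concrete what I mean by running the Kennedy scenario through the statistics, here is a minimal sketch of such a check. It uses synthetic precinct data, not the actual Ohio returns; the precinct counts, vote shares, and the size of the injected shift are all invented for illustration. The point is only the procedure: inject a hypothetical shift away from Kerry in some precincts and ask whether the kind of regression line the panel relied on visibly moves.

```python
# Sketch of a sensitivity check on synthetic precinct data.
# Every number below is an illustrative assumption, not a real Ohio return.
import numpy as np

rng = np.random.default_rng(0)
n_precincts = 500

# Hypothetical baseline: Kerry's precinct share tracks Fingerhut's share,
# with Kerry running about ten points ahead plus noise.
fingerhut = rng.uniform(0.15, 0.85, n_precincts)
kerry = np.clip(fingerhut + 0.10 + rng.normal(0, 0.03, n_precincts), 0, 1)

def fit_line(x, y):
    """Ordinary least squares slope and intercept."""
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept

baseline = fit_line(fingerhut, kerry)

# Kennedy-style scenario: shave ~4 points off Kerry in an arbitrary
# 20% of precincts, standing in for the places Kennedy flagged.
shifted = kerry.copy()
suspect = rng.choice(n_precincts, size=n_precincts // 5, replace=False)
shifted[suspect] -= 0.04

altered = fit_line(fingerhut, shifted)

print(f"baseline slope={baseline[0]:.3f} intercept={baseline[1]:.3f}")
print(f"shifted  slope={altered[0]:.3f} intercept={altered[1]:.3f}")
# If the fitted line barely moves, scatterplots like Graph 35 could not
# have detected a shift of this size; if it moves clearly, they could.
```

If the fitted slope and intercept are essentially unchanged after the injected shift, then the panel's graphs were simply not sensitive enough to rule the Kennedy narrative out, which is the question the report never asks.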
7. Manjoo: Claim: Exit polls are usually accurate.... Reality: "Nonsense," says Mark Blumenthal.

It's certainly true that the claims for exit poll accuracy are excessive. But there are some serious problems with Blumenthal's argument. First and most important, there used to be a very good explanation for why exit polls overpredicted the Democratic vote: massive spoilage of votes concentrated in Democratic districts. The spoilage rate has been declining, so exit polls ought to have been getting better and better. Second, flaws in the sampling process are well known, so the exit pollster whose reputation is riding on all this, Warren Mitofsky, should have been fixing them.

8. Manjoo: Kennedy is right that the polls in battleground states showed Kerry ahead. What he fails to say is that in many states, the exits didn't show Kerry ahead by the margin of error, meaning, statistically, that his lead wasn't secure.

Well, sure. And if the odds are two-thirds that I'll get pulled over for speeding in Maine, and two-thirds that I'll get pulled over in New Hampshire, and two-thirds in Massachusetts, and so on to North Carolina, and yet I don't get pulled over at all, that's surprising. It's from the combination of long odds that Freeman's estimate comes. It's not a strawman argument.

9. Manjoo: As I reported last year, Mitofsky has outlined a clear and convincing explanation for what went wrong with his survey. According to Mitofsky, interviewers assigned to talk to voters as they left the polls appeared to be slightly more inclined to seek out Kerry voters than Bush voters. Kerry voters were thus overrepresented in the poll by a small margin.

This argument persuaded some people, not others. The main problem is that the predictions it makes for very partisan precincts are wrong. Mitofsky also managed to drive suspicion through the roof by his secrecy. He failed to make clear how he adjusts precinct weights post-election. He also failed to make his data fully available (with appropriate privacy protections for respondents). The academics in this debate ought to seriously consider how they would view a colleague who did similar things.

And Manjoo is just whistling down the commode when he says that it's "irrelevant" that there was a lower response rate in Democratic precincts than in Republican precincts. Of course it's relevant when a hypothesis predicts behavior which is not observed. Nor has common sense intruded here. Democratic voters were called and threatened with jail if they voted. Republican voters were not. Who is more likely to be a "reluctant responder"? Why has no one gone out and talked with voters to see who is reluctant to respond and why? What are the statistical odds of the low response rate in Democratic precincts being due to chance? The DNC report should have answered that (a back-of-the-envelope version of such a check is sketched in the postscript below).

I'll be the first to say that the exit polls are not absolutely conclusive. One of the weaknesses of Freeman's case is how the definition of "battleground" was arrived at. But the exit polls have oddities which need to be researched. No one, as far as I know, has even bothered to adjust the exit polls vs. actual vote to reflect spoilage. No one has looked outside of Ohio to see if Mitofsky's explanation holds elsewhere. Why are people so incurious? Could it be that they have made up their minds and are simply turning their scientific training to support their opinions?

This ends part 4, to be continued.
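P.S. On the question of whether the lower response rate in Democratic precincts could be due to chance: here is the sort of back-of-the-envelope check I have in mind. The response rates and sample sizes below are hypothetical placeholders, not Mitofsky's actual numbers, and pooling all approached voters into two groups ignores precinct-level clustering, which a serious analysis would have to handle. It is only meant to show that the question is answerable with standard methods.

```python
# Back-of-the-envelope two-proportion z-test on hypothetical numbers.
# The rates and counts below are placeholders, NOT the real exit-poll data.
import math

def two_proportion_z(success1, n1, success2, n2):
    """Pooled two-proportion z statistic and two-sided p-value."""
    p1, p2 = success1 / n1, success2 / n2
    pooled = (success1 + success2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided, normal approx.
    return z, p_value

# Hypothetical: 53% response among 5,000 voters approached in Democratic
# precincts vs. 56% among 5,000 approached in Republican precincts.
z, p = two_proportion_z(int(0.53 * 5000), 5000, int(0.56 * 5000), 5000)
print(f"z = {z:.2f}, two-sided p = {p:.4f}")
```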
Comments:
"And if the odds are two-thirds that I'll get pulled over for speeding in Maine, and two-thirds that I'll get pulled over in New Hampshire, and two-thirds in Massachusetts, and so on to North Carolina and yet I don't get pulled over at all, that's surprising. It's from the combination of long odds Freeman's estimate of the odds comes from. "

One thing about this statement. It's not a good analogy with regard to what Manjoo was saying in that tidbit. The thing is, in your analogy, driving through each of those states would be independent of the others, so looking at any combination of "long odds" or whatever would not be a valid way to look at it. As far as what Freeman is saying, he is suggesting that these incidents are not independent... so the speeding analogy doesn't quite fit.
 
Anonymous, thanks for your post.

However, your take is not correct. Freeman does treat the states independently. Speaking of FL, PA, and OH, he says in the 12/29/04 version:

Assuming independent state polls with no systematic bias, the odds against any two of these statistical anomalies occurring together are more than 5,000:1 (five times more improbable than ten straight heads from a fair coin). The odds against all three occurring together are 662,000 to one.

That's pretty clear that he's treating these as independent events, isn't it?
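To make the arithmetic concrete: treating the states as independent just means multiplying the individual probabilities, and that is how small per-state probabilities compound into very long joint odds. The per-state numbers in this sketch are illustrative stand-ins, not Freeman's published figures, and the state count in the speeding example is likewise just for illustration.

```python
# How independent long-shot probabilities combine by multiplication.
# The per-state probabilities here are illustrative, not Freeman's figures.
from math import prod

def odds_against(p):
    """Express a probability p as 'X to 1 against'."""
    return (1 - p) / p

# Hypothetical per-state probabilities of a polls-vs-count gap this large:
per_state = [0.011, 0.012, 0.010]
joint = prod(per_state)  # independence means the probabilities multiply
print(f"joint probability: {joint:.2e}")
print(f"odds against: about {odds_against(joint):,.0f} to 1")

# The speeding analogy from the post: a 2/3 chance of being pulled over
# in each of, say, 12 states, yet never pulled over at all.
p_escape_all = (1 / 3) ** 12
print(f"chance of escaping all 12 states: {p_escape_all:.2e}"
      f" (about 1 in {1 / p_escape_all:,.0f})")
```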
 