Saturday, August 11, 2007

Less is more in some vote recounts

The 2000 Florida election debacle is only one of a number of recent races in which the margin of victory was slimmer than the margin of error in the count itself.

But a vote-by-vote recount is a somewhat dubious proposition. How could one substantially lower the margin of error in the recount? The more votes that are counted, the greater the likelihood of error.

I recall that media firms hired a polling expert to redo the Florida count. Apparently he took measures to lower the error rate, but I am unsure by how much.

So a thought occurs: Why not simply count a random sample of the ballots when recounting? For any given number of votes cast, there is a sample size that will pin down the result to within a chosen margin of error at a 99 percent (or greater) confidence level, as the short calculation below illustrates.
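Here is a minimal sketch of that calculation, assuming a two-candidate race, a simple random sample, and the worst-case variance (a 50-50 split); the figures are illustrative only, not anything an election authority actually uses.

    import math

    def sample_size(ballots_cast, margin, z=2.576, p=0.5):
        """Simple-random-sample size needed to estimate a candidate's vote
        share to within +/- margin at the confidence level implied by z
        (z = 2.576 is roughly 99 percent), using worst-case variance p = 0.5."""
        n0 = z ** 2 * p * (1 - p) / margin ** 2
        # Finite-population correction: the required sample shrinks once it
        # becomes a noticeable fraction of all ballots cast.
        return math.ceil(n0 / (1 + (n0 - 1) / ballots_cast))

    # A million-ballot race estimated to within half a percentage point at
    # 99 percent confidence needs on the order of 62,000 sampled ballots,
    # a small fraction of the total.
    print(sample_size(1_000_000, margin=0.005))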

That sample size is typically far smaller than the number of ballots cast. So, as long as care is taken to ensure the sample is truly random, the error made in counting the sampled ballots should be much smaller than the error made in counting every ballot. That counting error, added to the sampling error implied by the normal curve, might well stay below the error of a full recount. In such special cases, the sample would identify the winner with more confidence than a full recount would. The back-of-the-envelope comparison below makes the trade-off concrete.
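This comparison rests on purely hypothetical assumptions supplied for illustration: suppose a full hand recount misreads a small fraction of ballots and those misreads lean toward one candidate rather than cancelling out, while the sampled ballots are scrutinized carefully enough that their own misread rate is negligible. With the numbers below, the sample's 99 percent margin of error comes out smaller than the recount's systematic error; with gentler assumptions about recount errors, the comparison flips.

    import math

    # Purely illustrative error model; none of these rates are measured values.
    ballots_cast = 1_000_000
    recount_misread_rate = 0.02   # fraction of ballots misread in a full recount
    misread_lean = 0.65           # share of misreads that break toward one candidate

    # Expected net shift in the leader's vote share from a full recount:
    # misreads that break 65/35 do not cancel, so a residue proportional to
    # the misread rate remains.
    recount_error = recount_misread_rate * (2 * misread_lean - 1)

    # 99 percent sampling margin of error for a carefully counted simple random
    # sample, worst-case variance p = 0.5, with the finite-population correction.
    sample_n = 66_000
    z99 = 2.576
    sampling_error = (z99 * math.sqrt(0.25 / sample_n)
                      * math.sqrt((ballots_cast - sample_n) / (ballots_cast - 1)))

    print(f"systematic error of a full recount: {recount_error:.2%}")
    print(f"99% margin of error of the sample:  {sampling_error:.2%}")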

Will states, or some election judge, authorize such a procedure in a very tight race? It would be a tough sell politically, since the public is likely to believe that a full recount must be more accurate than a random sample.
