Exit Poll Paper rejected by Political Science Journal

My paper on exit polls was rejected in its first peer review, primarily because of academic researchers’ bias against using exit polls to measure election accuracy. Because they don’t trust exit polls as a measure of vote-count accuracy, they don’t accept my conclusion. This is not surprising: some journal editors rejected the paper on that basis alone, without even sending it to reviewers. The journal I finally sent it to was the first one willing to at least consider it.

Because this is a controversial conclusion, I needed to include a lot of backup information justifying it. My original paper was over 8000 words, more than twice the journal’s limit. It was sent back immediately, without any reviews, based on that and another minor bookkeeping error I had made. I corrected the bookkeeping error and drastically revised the paper, cutting it down to a bit over 5000 words. I am grateful they were willing to assign reviewers to it, but most of the valid criticisms arose because I had cut so much of the supporting documentation out of the revision they received. This is a conundrum I have not yet resolved: if I include all of the analyses they wanted to see, the paper will be longer than peer-reviewed journals are generally willing to accept.

Here is what the reviewers said and my responses to them. The reviewers’ comments are in italics:

Reviewer: 1

Comments to the Author

“In the absence of any deliberate manipulation of results, the difference in vote share between the official count and an exit poll (e) will be randomly distributed around zero and relatively small” (p. 6).

That’s a crazy assumption. Significant discrepancy between an official tabulation and a poll estimating the same quantity from a sample of the relevant population can follow from: (a) deliberate manipulation of the tabulation; (b) inadvertent error in the tabulation; (c) inadvertent misrepresentation of behavior by poll respondents; (d) deliberate misrepresentation of behavior by poll respondents; (e) systematic differences between those willing and those unwilling to respond to the survey, in the (almost universal) case where the survey does not cover the whole population; (f) error by the analysts comparing the poll and official results. These authors implicitly assume that (b) through (f) are negligible, which is, frankly, ludicrous.

My response:

My first draft presented detailed responses to these alternate explanations – the “Liars, Idiots and Introverts” section covered most of them, with additional paragraphs scattered through other sections. I toned the section down and then ended up cutting it completely from the version he/she saw. I’ll put it back in, at least the toned-down version, before I send the paper off again.

A priori, (a) is an unlikely culprit most of the time if only because falsifying election results is usually a felony and those rigging the outcomes would be taking large risks. That point is certainly not an “impossibility theorem,” and there are surely some cases of deliberate fraud. But (a) is not a natural first suspect, and pollsters have long warned consumers of polling data not to exaggerate the accuracy of exit (or, indeed, other) polls. See, eg, https://fivethirtyeight.com/features/ten-reasons-why-you-should-ignore-exit/.

My response:

This seems to reflect a serious bias against using exit poll results to verify election accuracy. There are certainly limitations to such data, particularly when it is used to predict outcomes or general trends. That was not the aim here. This particular exit poll was not designed as a standard opinion poll, but as a standard audit of process results, meant to isolate a problem area and assess the size of the problem. The accuracy of my calculations rests on well-understood statistical methods appropriate for the data.

I find it interesting that there is such strong antipathy among political science academics to using exit poll results to verify the accuracy of election results. I have seen no solid reason for the disdain, but it has been common in my queries to editors about whether they would consider the paper at all.

One consequence of adopting this stance is that it closes off the only legitimate avenue voters have to assess the accuracy of their precinct results. Voters cannot provide evidence sufficient for academics to take their concerns seriously and start doing something to put out the fire in the theater. Any vocalized suspicion that voting machines are rigged is dismissed as tinfoil-hat territory, as this reviewer just did above.

I am also troubled by the idea that falsifying election results can be dismissed because it’s illegal and heavily penalized if caught. That’s like declaring a death more likely due to natural causes simply because murder is rare, illegal, and heavily penalized when caught. It does illustrate the difficulty of getting this type of controversial hypothesis through the peer-review system.

I am instructed to limit my comments to 500 words so I’ll raise only a few more specific worries.

Responding to an uncited quotation by Thomas König (probably in an APSR rejection letter), the authors claim that an un-refereed e-book proves that discrepancies between exit polls and official results are fraud. Bollocks.

My response:

This is because I included Jonathon Simon’s “Code Red” book as a reference. There are certainly reasons to be suspicious of a self-published book of this nature. On the other hand, I have the appropriate qualifications to perform a peer review of this book; I have read it and found the data convincing. This reviewer scoffed at the publishing venue while demonstrating how difficult it is for such a hypothesis to make it through the peer-review process.

What I was trying to express with that citation was the justification for considering the hypothesis at all, and why I set up exit polls to evaluate it independently: the hypothesis that our voting machines are being manipulated is not crazy or ludicrous, but a legitimate concern for voters. I don’t understand why academics in the field of political science are unwilling to give the hypothesis serious consideration, but they don’t appear willing to entertain the notion that our voting machines are being rigged with anything but ridicule.

– A second cursory argument against the polls being wrong comes up following the main analysis, when the authors claim that the discrepancies are not correlated with registration skew, but provide no clear details (this is a few lines on “Corollary 1” at the top of page 10).

My response:

This section originally had a table, a chart, and more detail, but I removed them while trying to cut the paper to an acceptable length. I’ll consider putting this material back in, perhaps in an appendix to keep the paper length down.

– Finally comes an ANOVA which, for the first time, acknowledges that the poll covered more races than the five justice-recall events. Insofar as systematic differences between responders and non-responders drive poll-tabulation discrepancies, differences across contests might be informative, so it is wise to use all of the information available and not analyze contests in isolation. How informative the multiple contests are depends in part on how strongly correlated the voting patterns are across contests. Generally, a serious analysis of survey-result discrepancies should make use of all of the information at hand. If the authors believe that only the KS SC contests were rigged (unnecessarily as it turns out!?), they can say so explicitly and make better use of the other races. The analysis done through page 9 in this manuscript ignores the other contests and is, accordingly, of limited value.

My response:

In the final submitted version, I took out the analyses of those races. They provide more support for the rigging hypothesis, with four additional anomalies, only one of which has a reasonable alternative explanation. Thinking like a mathematician, I reasoned that since the judge races were sufficient to prove deliberate manipulation, the other races did not need to be explicitly covered. For a mathematician this is logical, but based on this comment, it was not a good choice.

– “The relatively low response rate for provisional ballots and relatively high rate for the scanned paper ballots at the Urban Wichita and Sumner County sites indicate that some provisional voters mistakenly marked that they had used a paper ballot.” Sounds reasonable, but this is not a minor detail, but, rather an important indication that option (c) above (inadvertent errors in describing behavior by poll respondents) is in evidence. Poll respondents who mistakenly misreport their method of voting can, equally, misreport how they voted.

My response:

Actually, no, I disagree that they can equally misreport how they voted. Being mistaken about which type of paper ballot was used is an understandable and easy error to make. If the provisional voters differ in their choices (a question that has not been answered), including them will bias the results.

Mistakenly answering Yes versus No can be expected to be a considerably less frequent error. It is also reasonable to assume that inadvertent errors of that nature will be randomly distributed among all responses, so such errors – as I explicitly assumed and this reviewer rejected as “crazy” – will be randomly distributed around zero and relatively small. Thus, no bias is expected from them. This is an important distinction between the two types of inadvertent error, since bias is what we are attempting to measure and then attribute to machine rigging. I’m not sure how to revise the paper to deal with this criticism; I will have to give it considerable thought.
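As a sketch of that point (my own illustration, not from the paper; every number here is hypothetical), the following simulation flips each respondent’s Yes/No answer with a small probability. Symmetric flips actually nudge the reported share slightly toward 50% rather than exactly zero, but for plausible error rates the expected gap is tiny relative to the discrepancies at issue:

```python
import random

random.seed(42)

def poll_discrepancy(n_voters=1000, true_yes=0.55, flip_rate=0.02):
    """One simulated exit poll in which each respondent independently
    mis-marks their answer (Yes <-> No) with probability flip_rate."""
    yes_reported = 0
    for _ in range(n_voters):
        voted_yes = random.random() < true_yes
        if random.random() < flip_rate:  # inadvertent error: answer flips
            voted_yes = not voted_yes
        yes_reported += voted_yes
    return yes_reported / n_voters - true_yes  # poll share minus true share

trials = [poll_discrepancy() for _ in range(2000)]
mean_gap = sum(trials) / len(trials)
# Expected gap is flip_rate * (1 - 2 * true_yes) = -0.002: small and near zero
print(round(mean_gap, 4))
```

Mistaking the ballot type, by contrast, moves whole ballots between comparison groups, which is why it can bias a group comparison in a way that rare answer flips cannot.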

– Figure 5 shows a statistically significant Democratic effect. The authors briefly assert that, “the potential sampling bias is not large enough to explain the SCJ results” (p. 11) without providing a better sense of magnitude.

My response:

This is a valid criticism I can use to improve the paper. To get a sense of the magnitude of the difference, readers currently have to compare values from two different tables. I will add an explicit statement of the difference in magnitude, and additional points or lines on the graphs to make it clear.

– What should be compared is distributions, because all of the races feature multiple options, and an analysis of only a scalar (Republican vote share or Democratic vote share or “Yes” share) is incomplete. This is a simple point the authors correctly broach, and that the ANOVA acknowledges (though with collapsing of abstention and “other” voting). However, this key point is ignored in their main work, as when they dwell on the yes votes for the KSC justice retention races on pages 7-9. Voting options in the SC contests were not 2 (Yes or No), but, rather, 3 (Yes, No, and Abstain/Leave Blank). These authors drop the item non-responses as though they are random and ignore the important blank/abstain shares in the main analysis, and neither move is wise.

My response:

This is a valid criticism of the analysis technique used for the judges; it’s the sort of technical nit that academics like to argue over but that has no impact on the results. I actually ran both types of analyses, and the results were very similar. I didn’t want to include both and deliberated for a while over which one to use. I chose to go with the two responses, yes and no, rather than including the non-responses as a separate category, because it is a simpler, cleaner analysis for non-statisticians to understand. I’ll reconsider which to include, but even if I switch to the other type, it only changes the numbers in some tables and bullet points; it doesn’t change the conclusions at all.

– What jumps out of Figure 3 is that the Sumner results appear to be different in kind from the other 6, insofar as all judges’ yes vote shares in the exit poll exceeded their official yes shares. The obvious candidate for an explanation is not that the cheating was of a distinct form in Sumner, but that the people implementing the exit poll forgot to include a “left it blank” option on the survey form. Apparently, more of the poll respondents who did abstain chose the wrong answer “Yes” than the wrong answer “No”, rather than leaving the poll question blank too. That pattern was un-hypothesized and is mere surmise on my part. The details of the survey matter a great deal and the blunder of omitting one of the options changed the conclusion greatly. It was a useful mistake! In that light, it is startling that the authors are so relaxed about alleging fraud based on the survey-versus-tabulation comparison, on the premise that the other surveys cannot possibly have been biased or wrong in any way.

My response:

I have a definite defensive reaction to calling this difference a blunder or mistake. It was a deliberate decision made by the manager of that exit poll due to space considerations: he also included questions about other races unique to his location, so space was at more of a premium on his survey form than on others with fewer questions. I disagree that this difference would cause the discrepancy seen in Sumner County, but I cannot prove it. However, even if these data were dropped from the analysis, I would still have better than 99% confidence that the other sites show signs of voting-machine rigging. Accepting this concern as legitimate and removing the data would not alter the conclusions of the paper.

– Bishop, in POQ in the 90s, reported an experiment to show that it seemed to matter if exit polls were done as face-to-face interviews or as self-reported surveys. Arguably, that article showed that merely emphasizing secrecy by marking the form “Secret” helped improve response rate and accuracy (the latter is not fully clear and would not be the right conclusion in the world where tabulations are all suspect, inhabited by these authors). It would be helpful to know exactly how the volunteers who administered these exit polls obtained the answers.

My response:

Details about how the survey was conducted were included in an earlier version of this paper, but in my efforts to reduce it to an acceptable size, I may have cut that section too much. I can make this clearer by restoring my original, longer write-up.

– “In order to drive down the size of the error that could be detected, we needed the sample size to be as large as possible relative to the number of votes cast.” Misleading. The effects of getting high coverage are conditional on no error in responses (deliberate in the case of socially desirable answers, inadvertent in the case of busy/distracted/annoyed respondents participating with reluctance or hurrying, etc.). And, of course, non-response is critically important, because contact is not measurement. One might use these data to compute how different the non-responders would have to be from the responders for the official data to be correct under an assumption of no-error-in-poll-responses. That would be a bit interesting, but the assumption would be hard to justify, and I’d still be disinclined to jump on the fraud bandwagon. My strong suspicion is that admitting to all of the possible sources of discrepancy will make it clear that it is not wildly implausible that those who wouldn’t talk to pollsters were a bit different from those who would and that alone could generate the gaps that so excite the authors.

My response:

I actually did compute how different the non-responders would have to be, as this reviewer suggested. The results were not particularly illuminating to me, and I did not include them in the paper. However, this reviewer is not the only person to have suggested it, so I will include it in the next revision.
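For readers who want to see what that computation looks like, here is a minimal sketch with made-up numbers (they are not my actual precinct figures). Treating the official Yes share as a response-rate-weighted mix of responders and non-responders, and solving for the non-responders’ share, gives the required difference:

```python
def required_nonresponder_share(official_yes, poll_yes, response_rate):
    """Assume poll responses are error-free. Then
    official = r * poll + (1 - r) * nonresp; solve for nonresp."""
    r = response_rate
    return (official_yes - r * poll_yes) / (1 - r)

# Hypothetical illustration: official 52% Yes, exit poll 58% Yes,
# 50% response rate -> non-responders must have voted only 46% Yes,
# a 12-point gap from the responders.
needed = required_nonresponder_share(0.52, 0.58, 0.50)
print(round(needed, 3))  # 0.46
```

The lower the response rate, the smaller the responder/non-responder gap needed to reconcile the two numbers, which is why high coverage matters.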

Reviewer: 2

Comments to the Author
The manuscript looks at a recent vote in Kansas and argues that the machine counted votes were manipulated.

The authors state that the best way to verify a vote count when one has no access to the voting records is an exit poll. I have two problems with this. First, exit polls have many problems and are usually regarded as less reliable than normal random sample surveys.

My response:

This seems to reflect a serious bias against using exit poll results to verify election accuracy. Exit polls as standardly done in the US do have many shortcomings when used to detect fraud. That is why I did not use a standard method, but designed my own approach based on my statistical and quality-engineering background.

Second, many alternative approaches exist that have not been discussed and build on work by Mebane, Deckert, Beber and Scacco, Leemann and Bochsler, and many others. These approaches often but not exclusively rely on Benford’s law and test empirical implications of specific forms of fraud.

My response:

This is useful to me as it provides direction for further research. My suspicion is that the alternative approaches being referenced would require resources that are not available to me, but I will look into these authors to see if I can learn something that will improve my paper and future research.
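As a toy example of the digit-test idea (my own sketch, using synthetic counts, and using the first digit even though Mebane’s election-forensics tests are usually run on the second significant digit), one compares observed leading-digit frequencies with the Benford distribution:

```python
import math

def benford_chi2(counts):
    """Chi-square statistic of the leading digits of a list of positive
    vote counts against the Benford first-digit distribution."""
    n = len(counts)
    observed = [0] * 9
    for c in counts:
        observed[int(str(c)[0]) - 1] += 1
    chi2 = 0.0
    for d in range(1, 10):
        expected = n * math.log10(1 + 1 / d)
        chi2 += (observed[d - 1] - expected) ** 2 / expected
    return chi2

# Synthetic precinct totals spread over several orders of magnitude
# conform closely to Benford, so the statistic stays small (df = 8).
counts = [int(10 ** (1 + 3 * i / 200)) for i in range(200)]
print(round(benford_chi2(counts), 2))
```

A large statistic flags digit patterns inconsistent with the expected distribution, though the literature debates how reliably such deviations indicate fraud rather than benign features of the data.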

The argument that the common tendency found in the US has to be attributed to fraud is baseless. The authors only provide one citation from a non-peer-reviewed source and do not provide other arguments to justify this claim.

My response:

This is referring to the book “Code Red” also denigrated by the first reviewer. What I was trying to express with that citation was the justification for considering the hypothesis my exit polls were set up to evaluate. I will definitely have to revise this section.

The discussion of stratified vs clustered sampling is unclear and the conclusion is confusing. The authors also mention that participation rates were excellent without providing the rates.

My response:

The discussion of stratified vs cluster sampling and the participation rates was a section that suffered from cuts made to reduce the length.

The main problem is that the authors assume the exit poll to provide something like a true benchmark.

My response:

This seems to reflect a serious bias against using exit poll results to verify election accuracy. These exit polls provided the citizens of southeast Kansas with the best benchmark obtainable given the voting equipment and processes used in Kansas. That this benchmark is not acceptable to academics should imply that the voting equipment and processes are themselves unacceptable, given that citizens have no other means of verifying the vote count.

But it is one of the fundamental lessons of social science polling that participation is correlated with socio-economic background variables. Education and survey participation are usually positively correlated and have to be corrected. Since these factors are also correlated with vote choice there is no reason to expect the exit polls in their raw form to be anywhere close to the truth.

My response:

This criticism reflects a misunderstanding of the cluster versus stratified sampling approach and of how the comparison was made. Using cluster sampling and comparing results by precinct and voting equipment allows us to dispense with the complicated adjustments for factors such as education level, income, or race that stratified samples require. Stratified sampling allows better predictions for the larger population and a more in-depth analysis of who is voting for whom, but it makes a direct comparison for the purpose of assessing the accuracy of the voting machines untenable.

In fact, Republicans are less likely to participate. The supplied ANOVA test for some implications is unclear and I would want to just see the plain participation rates correlated across all districts controlling for average education. But the non-response bias would perfectly line up with all findings and this is then the entire ball game.

My response:

I definitely need to rewrite the ANOVA section. The ANOVA showed a bias in responses against Libertarians and in favor of Democrats, but no consistent bias in either direction for Republicans. Whether that bias is due to inherent characteristics of the party members or to rigging of the machines can be debated. The ANOVA also showed that the size of that bias wasn’t enough to account for the discrepancies observed for the Supreme Court justices, and that was after assigning the Democratic bias to the four Republican justices that Brownback opposed.

I agree that it would be interesting to run the suggested correlation, but the average education of voters by precinct is not available for Kansas precincts, and I do not have the resources to compile such a database, even for only a few counties in Kansas.

I feel that it is extremely difficult to successfully argue along these lines. The authors fall short of being fully convincing on key points. Given this, and such probable alternative explanations, I feel that this paper does not achieve what it sets out to do.

My response:

This is the bottom line for both reviewers really. I wasn’t convincing enough. I’ll have to try again.

I’m not sure what the best approach would be and would appreciate any suggestions my readers have. Should I continue to limit the length of the paper to make it acceptable to academic publications, or should I include all the information and analysis results that reviewers and readers could reasonably want to see, even though that inflates the length beyond what peer-reviewed publications will consider?

If I go with the longer paper, I am basically limited to self-publishing without peer-review which is then easily dismissed by anyone who disputes the conclusions, such as the Kansas Secretary of State’s Office.

Should I seek publication in another field, or in a general scientific research publication? That route is less likely to get noticed by people with the ability to change our voting processes, but it would be available as a peer-reviewed reference to people who are trying to make improvements, eliminate paperless voting machines, and institute auditing procedures where machines are used.

These are the questions I will ponder as I revise my paper again.

WSU statistician: New voting machines ‘step in the right direction’

Some news coverage of my efforts:

Clarkson said the only way to be absolutely sure of the count is to do it by hand and do it in public with observers.

While she acknowledges that kind of counting would take longer, she said it was done that way for decades before computers came on the scene and is still a common practice in European democracies.

In fact, the Netherlands announced last month that it would use hand-marked and hand-counted votes for its legislative elections, to prevent possible hacking by Russian government operatives.


A Tale of Toms, Dicks, and Harrys

For new readers, I am discussing the analysis of exit poll results for S.E. Kansas in 2016.  These exit polls were performed because official audits are not done and independent audits are not permitted.  They are therefore the best information that I, as a voter in Kansas, have been able to obtain to make an independent assessment of the accuracy of the official vote totals as certified by our elected officials.

My assessment is that our voting equipment was rigged!  Not by enough to sway the outcome of the elections I had adequate data on, but the official results differed from our exit poll results by site too consistently in one direction to be anything other than deliberate.
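That “too consistently in one direction” reasoning can be made precise with a simple sign test. This is my own sketch, and the per-site gaps below are hypothetical numbers, not the actual results: under the null hypothesis that poll-minus-official gaps are just noise around zero, each gap is equally likely to be positive or negative.

```python
from math import comb

def sign_test_p(gaps):
    """Two-sided sign test: under the null, the number of positive gaps
    is Binomial(n, 1/2); consistently one-directional gaps are unlikely."""
    n = len(gaps)
    k = max(sum(g > 0 for g in gaps), sum(g < 0 for g in gaps))
    p_one_sided = sum(comb(n, j) for j in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * p_one_sided)

# Seven hypothetical sites, every poll-minus-official gap positive:
gaps = [0.031, 0.045, 0.052, 0.028, 0.039, 0.047, 0.035]
print(sign_test_p(gaps))  # 0.015625: unlikely to be mere noise
```

The test uses only the direction of each gap, not its size, so it is insensitive to the sampling details the reviewers questioned; it speaks to consistency of direction, not to the cause of the gaps.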

I recently received a harsh but relatively accurate and detailed review of my exit poll paper.  The overall judgement was that I need to take the passion and conviction out of my conclusions in order to make it acceptable for a peer-reviewed publication.  I must admit, I have allowed my passion to seep through, which is generally frowned upon in scientific papers.  I will rewrite it.

It is difficult to be dispassionate because this is very personal for me.  I was born in Wichita, Kansas.  I ran the exit poll booth at my own voting station, as did all of the site managers.  We are not doing this for fun and glory; I was able to get the volunteers needed to run five locations because enough people shared my concern about the accuracy of our official counts and were willing to spend the time and energy necessary to accomplish it.  If I want to see these results published in an academic journal, and I do, I have to tone down the anger that has bled into my writing about it.

The “Liars, Idiots and Introverts” section received some particularly scathing comments.  The reviewer found it offensive.  I’m not surprised; it was designed to be offensive.  But he/she is right that it’s inappropriate for an academic journal.  This section was written when I was feeling particularly frustrated by people disregarding my results by claiming sampling bias – i.e., people choosing not to participate or deliberately answering incorrectly.

There is also a possibility, that I did not discuss, of inadvertent error in ballot design causing a significant number of voters to err in the same way.  The Butterfly Ballot used in Florida during the 2000 election is an example of this type of error.

If I want to convince other people that other explanations are insufficient to explain the discrepancies, I need to do a better job of it.  In the end, it is a subjective evaluation of the relative probabilities of the possible explanations.  I cannot prove which one is correct.  No one can.  I think the probability of these results being due to sampling bias is too low to sway my assessment that our voting machines are rigged.  Here is another attempt to communicate my reasoning as to why I feel that way.

The Tale of Toms, Dicks and Harrys.

Tom is my name for folks who voted for Trump but didn’t want the family member or neighbor filling out their own survey next to them to know it.  Or maybe Tom is a trickster who delights in giving pollsters wrong answers.  Or maybe he/she just dislikes taking exit polls.  To see the results we did, we had to have many Toms who either lied to us, claiming Hillary instead, or simply refused to fill out our survey.

Dick is my name for folks who voted for the Libertarian candidate in the Senate and 4th Congressional races but didn’t want the family member or neighbor filling out their own survey next to them to know it.  Or maybe Dick is a trickster who delights in giving pollsters wrong answers.  Or maybe he/she just dislikes taking exit polls.  Dick either lied to us or refused to fill out our survey.

Harry is my name for folks who voted for Miranda Allen, the independent candidate in the 4th Congressional race, on a voting machine but didn’t want the family member or neighbor filling out their own survey next to them to know it.  Or maybe Harry is a trickster who delights in giving pollsters wrong answers.  Or maybe he/she just dislikes taking exit polls.  Harry either lied to us or refused to fill out our survey.

Finally, we get to the judges.  We need two more sets of voters to explain the judges’ results as sampling bias.  I’ll call them Johns and Joans.  Johns voted against all the judges while lying to or refusing to take our survey, and live in Wichita and Winfield but not Wellington.  Joans live only in Wellington and wanted to keep the judges while lying to or refusing to take our survey.  Apparently, we have nearly twice as many Johns and Joans as we have Toms, Dicks, and Harrys.

What is the probability that all the Toms, Dicks, Harrys, Johns and Joans in S.E. Kansas are responsible for the bias in our exit poll results, rather than deliberate machine manipulation or rigging of the machines?  This is a valid question to ask.  We can examine our data and see which explanation is a better fit to the data.

There’s a stereotype of Libertarians as antisocial jerks. If this were accurate, it might be a reasonable alternative explanation for the Libertarian results – a lot of Libertarians are Dicks.  On the other hand, how likely is it that, in Southeast Kansas, home of Koch Industries, a few Libertarians independently (or possibly even in cahoots) successfully hacked the voting machines here?

Why is Wellington devoid of Toms?

Why are Harrys found only in Sedgwick County, and why do they disdain the use of paper ballots?  Is it more likely that a statistically significant percentage of Miranda Allen’s voters in Sedgwick County, but not in Sumner or Cowley, are Harrys? This pattern does fit the explanation of a “butterfly ballot” type problem, as it shows up in only one county and on only one type of voting equipment. It is possible that Sedgwick County officials inadvertently programmed the voting machines in a way that caused voters to accidentally select Miranda Allen rather than leaving the 4th Congressional District race blank, as they reported to us.  Or maybe Miranda Allen has a fan in Wichita with the wherewithal to successfully hack the voting machines in Sedgwick County?

Now consider the relative probabilities of the two alternative hypotheses.  What is the probability of all the Toms, Dicks, Harrys, Johns and Joans existing in the numbers required to produce the discrepancies we found in our survey results versus the probability that some nefarious and technically competent people were able to access voting equipment or software and made unauthorized changes to the software?

Here’s a recent opinion piece on that topic published in Scientific American: “Nevertheless, it has become clear that our voting system is vulnerable to attack by foreign powers, criminal groups, campaigns and even motivated amateurs.”

I will say that the probability that Libertarians have more than their fair share of Dicks everywhere is harder for me to reject than the existence of all the Johns and Joans.  But accepting that as a viable explanation also embodies some assumptions about the character of Libertarians I am loath to accept.

Until I see evidence that Libertarians actually have these traits in greater numbers, I assume that the tricksters, introverts and idiots are randomly distributed among the various political parties.  Other people can and do differ in their willingness to accept that assumption.

Now, this Tom, Dick, and Harry story won’t go into my paper.  It’s not stuffy enough for an academic journal.  But writing it helped me reach a concise statement of why I feel sampling bias doesn’t work as a reasonable explanation for the exit poll results.  I hope my readers find it helpful in understanding my thinking as well.

In fact, writing it has allowed me to add the last bullet to the set of arguments I’m working on for my revised paper.  In bullet form, here are my reasons for concluding that my exit poll results prove deliberate fraud, and that sampling bias and inadvertent ballot or survey errors are not sufficient to explain the data.


Update on Exit Poll Results

On Feb 11th, I spoke with the Women for Kansas Cowley County (W4K-CC) Meeting.  We discussed the results of the exit poll they had run on Nov. 8th.

I discovered that the Cowley County paper-ballot official results are not an apples-to-apples comparison, as they are in Sedgwick County.  Those results are not suitable for inclusion in my analysis.

It is not the only dataset found to be unsuitable for inclusion.  I have removed it from my upcoming peer-reviewed publication.  I have decided to leave my original blog post unchanged while updating my post discussing excluded data.

I understand why people don’t pay attention to statistics. They can easily be twisted to yield any result desired by management. That happened in Flint, Michigan.

On the other hand, there are legitimate reasons to eliminate data when it is found to be unreliable. The Cowley County data are such an instance: the official numbers include mail-in ballots cast in those precincts.

Another reason I have chosen to leave the original graphs up is that they nicely demonstrate the difference in pattern between a randomly introduced source of variation and a consistent bias that is evidence of fraud.

The Cowley County results had me scratching my head. The machine results showed trends similar to Wichita’s. The paper ballots showed only large errors, but ones benefiting a random scattering of candidates and races. If you are interested in this sort of analytical detail, feel free to go through the charts and decide for yourself. I can’t rule out fraud for that dataset, but I don’t know what caused the deviations. If it was fraud, it was either a mercenary selling votes to any candidate, or perhaps multiple agents working at cross purposes. But given that the data collection limitations impose greater variability, which would produce the pattern of errors we see in those graphs, fraud is not the most probable cause of those deviations.

Datasets are sometimes tainted by problems that have nothing to do with the question being asked, but are due solely to constraints on the data available. There are limitations imposed by the methods of both the official results and the exit poll survey. I’m publishing ALL of the raw data, as well as detailing what data is excluded and why. Anyone who cares to may look at what is being left out and decide for themselves whether the reasoning for the exclusion is sound. With the exception of the Cowley County data, the excluded datasets tend to support the fraud hypothesis.

How can you be sure that the voting machines in southeast Kansas were rigged?

How can I be so sure? Couldn’t there be some other cause of the bias? That was the most common question at my presentation Saturday, where I explained my exit poll results to the people who had helped collect the data and had a vested interest in understanding the results. I may have come across as a bit defensive on this question, and I’m sorry if I did. It’s hard to articulate the depth of my certainty, but I’ll try.

I carefully set up these exit polls to compare the official vote count by machine type. The only legitimate concern regarding the meaning of these results is a biased sample. Not everybody tells the truth; some people delight in giving false answers to surveys. How do you account for that? It’s a fair concern.

While I cannot prove that didn’t happen (at least, not without access to the ballots, which isn’t permitted), this is part of the normal error I expect.  It always helps to state assumptions explicitly.

INTROVERTS, LIARS, AND IDIOTS ASSUMPTION: THESE TRAITS ARE RANDOMLY DISTRIBUTED AMONG ALL CANDIDATES AND POLITICAL PARTIES. I am assuming that people who were less likely to participate (introverts), more likely to fudge their answers (liars), or more likely to make mistakes (idiots) in filling out the survey did not differ by candidate or party in their responses to our exit poll.
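This assumption has a testable consequence that a quick simulation can make concrete. Below is a minimal sketch (my own illustration, not part of the paper's analysis, with made-up rates): if non-responders and false responders are distributed evenly across the two parties, the poll-versus-official discrepancy stays small and centered near zero; if one party's voters skip the survey more often, a consistent bias appears.

```python
import random

random.seed(1)

def simulate_poll(n_voters=10_000, true_dem_share=0.45,
                  lie_rate=0.03, skip_rate_dem=0.30, skip_rate_rep=0.30):
    """Simulate one exit poll against a known 'official' count.

    Every voter's true choice enters the official count; in the poll a
    voter may skip the survey (introvert) or report the opposite choice
    (liar/idiot).  Returns (official Dem share, poll Dem share).
    All rates here are invented for illustration.
    """
    official_dem = poll_dem = poll_total = 0
    for _ in range(n_voters):
        is_dem = random.random() < true_dem_share
        official_dem += is_dem
        skip = skip_rate_dem if is_dem else skip_rate_rep
        if random.random() < skip:
            continue                      # no survey response
        answer = (not is_dem) if random.random() < lie_rate else is_dem
        poll_dem += answer
        poll_total += 1
    return official_dem / n_voters, poll_dem / poll_total

# Symmetric misbehavior: liars and introverts hit both parties equally.
official, poll = simulate_poll()
d_symmetric = poll - official

# Skewed misbehavior: one party's voters skip the survey more often.
official, poll = simulate_poll(skip_rate_rep=0.45)
d_skewed = poll - official

print(round(d_symmetric, 3), round(d_skewed, 3))
```

With equal rates the discrepancy is within ordinary sampling noise; with the skewed skip rate it becomes a consistent shift in one direction, which is the pattern the assumption rules in or out.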

I received the following email that sums up this concern nicely and also suggests a couple of ways to check that hypothesis.

Hi Beth,

The observed discrepancies between official results and your poll results very clearly show that Clinton (D) voters were more strongly represented among those polled than in the official vote count; Trump (R) voters were less well represented. There are many possible explanations for this discrepancy. One hypothesis is that a certain percentage of voters “held their nose and voted for X” and would never have participated in the poll. If these voters tended to be more of one party than the other, then that party would be less represented in the polls.

Fortunately, your data provide a means to test this hypothesis about the “missing minority”, for it leads to this prediction:  
If a “missing minority” was biased towards X, then sites at which X had a greater percentage of the votes would be least affected by vote disparities.

A corollary prediction:  sites having the highest response rate would be least affected by vote disparities.

Have at it!

The main reason I find this hypothesis implausible is that the discrepancies for the Supreme Court judges were twice as large as, and followed the same pattern as, the discrepancies in the presidential race. There’s no reason to think more people ‘held their nose’ for judges than for president!

Regarding those two predictions:

  1. The sites with the greatest discrepancies were machine counts for SE Wichita, Urban Wichita and Cowley. The sites with the highest %Trump voters were Cowley, SW Wichita and Sumner. No correlation there.
  2. The site with the lowest response rate, Sumner with 25%, also had the lowest discrepancies between the exit poll and the official results for the Pres. race.
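The first prediction can also be checked mechanically by correlating each site's Trump share with the size of its discrepancy. Here is a minimal Python sketch; the site numbers below are made up purely for illustration (the real values are in the published raw data):

```python
import math

# Hypothetical site-level figures for illustration only:
# site: (Trump share of official vote, |exit-poll discrepancy|)
sites = {
    "SE Wichita":    (0.52, 0.060),
    "Urban Wichita": (0.38, 0.055),
    "Cowley":        (0.68, 0.065),
    "SW Wichita":    (0.63, 0.020),
    "Sumner":        (0.66, 0.010),
}

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

shares, discrepancies = zip(*sites.values())
r = pearson(shares, discrepancies)
print(round(r, 2))
```

If the “missing minority” hypothesis were true, r would be strongly negative (higher Trump share, smaller discrepancy). With only five sites, any r needs to be read cautiously, but the check takes seconds to run against the real data.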

In short, we do not see the other data relationships we would expect if the introverts, liars, and idiots assumption were false. There is no reason to assume these individuals were more likely to vote for one candidate than another, which would be required to produce the bias in our data.