A Day at the Legislature

Friday, I took vacation time from work and spent the day listening and testifying at the Oct 27th meeting of the Joint Committee on Ethics and Elections in Topeka. I want to thank all the people who have donated to the Show Me The Votes Foundation. While the travel expenses are small, the knowledge that I don't have to take on the costs personally and that I am speaking for others as well as myself provides significant motivation for me to do this.

The majority of the day was scheduled with people from non-profit agencies talking about the joys of Ranked Choice Voting. They were all very professional and did a decent job of covering the cons as well as the pros. I actually found it all fairly interesting. It turns out that one of the unintended consequences of ranked choice voting is a decrease in negative campaigning, because insulting an opponent supported by a voter doesn't incline that voter to make you their second choice. A big plus for the method in my opinion, but that opinion was not shared by all of the committee members.

One senator was confused by a technical detail regarding a theoretical situation that could result in a sub-optimal outcome (one of the cons presented). I don't think the presenter quite managed to understand his confusion well enough to alleviate it. My take was that the senator didn't realize that in the ranked choice system, a candidate a voter ranks 3rd is presumed to beat one that same voter ranks 6th in a head-to-head match-up. This is a reasonable assumption in ranked choice voting because we are referring to the choices of an individual voter. Election outcomes, by contrast, are well established as potentially intransitive when comparing 3-way races to head-to-head match-ups, so it was not an unreasonable question for the senator to pose. (An intransitive relation is one where A > B and B > C does NOT imply A > C.)
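The senator's worry is well founded for group preferences: even when every individual ballot is transitive, the majority preference across voters can cycle. A minimal Python sketch with three hypothetical ballots (my own construction, not anything presented at the hearing):

```python
# Three hypothetical voters, each with a perfectly transitive ranking.
ballots = [
    ["A", "B", "C"],  # voter 1: A > B > C
    ["B", "C", "A"],  # voter 2: B > C > A
    ["C", "A", "B"],  # voter 3: C > A > B
]

def majority_prefers(x, y):
    """True if a majority of ballots rank candidate x above candidate y."""
    wins = sum(1 for b in ballots if b.index(x) < b.index(y))
    return wins > len(ballots) / 2

# Head-to-head, the group prefers A over B and B over C, yet also C over A:
# each pairwise comparison is won 2-to-1, forming a cycle.
print(majority_prefers("A", "B"))  # True
print(majority_prefers("B", "C"))  # True
print(majority_prefers("C", "A"))  # True
```

Within a single ballot no such cycle is possible, which is why ranked choice counting can safely assume a voter's 3rd choice beats their 6th.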

Understanding and illuminating those underlying assumptions can be difficult. This case involved a deeply buried mathematical assumption about the elemental structure of the data and the answer that justifies it relates to a technical detail regarding the data collection in this context. I have been on both sides of such mathematical confusion, so I could sympathize with both. It’s not easy to identify the buried assumption that isn’t shared.

There were two other Kansans, both with math backgrounds, who testified about ranked choice voting. I was nominally in favor of it: ranked choice voting is a method with a higher probability of producing a government representative of the majority will of the voters. It's also more complicated to compute and may require substantially more time to arrive at a winner. As far as I'm concerned, though, it's putting lipstick on a pig if they don't address the elephant in the room regarding voting machines, which is what I was there to tell them about.
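To make the extra computation concrete, here is a minimal instant-runoff sketch in Python (instant-runoff is the most common ranked choice counting rule; the candidates and ballot counts are made up for illustration):

```python
from collections import Counter

def irv_winner(ballots):
    """Instant-runoff count: repeatedly eliminate the candidate with the
    fewest first-place votes until someone holds a majority of the
    ballots still naming a surviving candidate. (Ties not handled.)"""
    remaining = {c for b in ballots for c in b}
    while True:
        # Each ballot counts toward its highest-ranked surviving candidate.
        tally = Counter(
            next(c for c in b if c in remaining)
            for b in ballots
            if any(c in remaining for c in b)
        )
        top, votes = tally.most_common(1)[0]
        if votes * 2 > sum(tally.values()):
            return top
        remaining.remove(min(tally, key=tally.get))

# Hypothetical ballots: A leads the first count 4-3-2, but once C is
# eliminated, C's voters transfer to B, who then wins 5-4.
ballots = [["A", "B"]] * 4 + [["B", "A"]] * 3 + [["C", "B"]] * 2
print(irv_winner(ballots))  # B
```

Unlike a plurality count, the elimination rounds can't start until every ballot is in hand, which is the source of the delay mentioned below.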

I told the committee flat out that my research, currently under peer review, shows that our machines are being manipulated, and that they need to do something about it. I could be proved wrong with an audit, except … no audits allowed.

I complained about the fact that in Sedgwick County we have a brand new, expensive voting machine system with a paper trail, yet our election officials insist that without a legislative solution, those ballots may never be opened and reviewed by human eyes to verify the accuracy of the count. That is pretty much what the appeals court judge told me back in September when I asked what voters could do to hold our officials accountable. I think I made it clear to the committee that the current situation is unacceptable.

They mentioned an audit bill that was passed last year. I made it clear that audits are not enough! We need transparent, accurate counts on election night*. Audits only tell us how far off the results were and whether outcomes were likely affected. They don't fix anything and they don't prevent anything. We have to do that part too.

I am writing down the ideas I hope I conveyed. I'm more eloquent on the internet than I am in person. I was dorky and awkward as always. I've accepted that about myself, and my usual audience (engineers and/or math students) is fairly tolerant of my missing social cues as long as my math is good. But this was not my usual audience.

I got chided by the Chair for speaking off-topic. I got a lecture from Senator Miller about the unreliability of exit polls, which included a well-delivered "ma'am" that shut me down when I tried to interrupt him. Senator Faust-Goudeau, who had encouraged me to come, publicly thanked me for my testimony.

A few ladies from the League of Women Voters and Representative Elizabeth Bishop from my district came and sat in on my speech for moral support. All-in-all, it went about as well as talking about the elephant in the room usually does.

Rather than making a break for the door and heading home as soon as I finished my testimony, I swallowed my introvert instinct and hung around until the session was over and spoke with some of the committee members afterwards.

Sen. Miller told me he agrees with me about the audits. I acknowledged that audits are better; they were my first choice, after all. He doesn't think that exit polls should be taken seriously, but acknowledged they're the best data available to Kansas voters.

Senator Faust-Goudeau asked me to help prepare a bill to get the transparency we need to have confidence in election outcomes. She has given me a spark of hope that if I work at it, change might happen. She is one awesome lady! We are very lucky to have her working for Kansans.

*I wish I had thought to say "whenever the winners are announced." One of the cons of the Ranked Choice Voting system is that it may take a couple of days to collect the ballots and compute the winner of a statewide race. It's a drawback I could live with to get more representative outcomes in our elections.

Testimony for the Oct 27th meeting of the Joint Committee on Ethics and Elections

Ranked Choice Voting – Excellent Idea! But only if combined with secure and transparent vote counting processes.

My name is Beth Clarkson. I am a lifelong Kansan, born in Wichita. I hold a Ph.D. in statistics and have been certified as a quality engineer by the American Society for Quality for the past 30 years. Over the past several years, I have become more and more concerned about the accuracy of our voting machines, which has never been evaluated post-election via a hand count. I have attempted to get access to the records needed to perform an audit of our voting machines more than once over the past several years, but I have been told no every single time.
Having failed to receive permission to do an audit, on Nov 8th, 2016, with the help of volunteers, I set up citizens' exit polls at five locations in south central Kansas. This was our attempt to find out the accuracy of our voting machines.

I'm afraid that the evidence from those exit polls points overwhelmingly to our voting machines being manipulated. Not by enough to alter any outcomes in the races studied; the maximum deviation between our exit polls and the official results was less than 5% in the suspect races. But it is still extremely troubling to me as a voting citizen of Kansas. I have submitted these results and my conclusions for peer review. I will be happy to provide an electronic copy of this paper on request. Today, I will simply summarize the findings.

A common question I get regarding these findings is "Couldn't your results be due to Republicans being less likely to fill out the exit poll survey?" The answer to that question lies in the patterns shown in the different races. While certainty is never forthcoming from statistical analysis, the hypothesis that 'Party X members respond to surveys at a different rate than others' is a plausible explanation for only the Libertarian Party in two races. It does not suffice for any of the others.

There are a number of statistically significant differences between our exit poll results and the official results, randomly scattered through the five locations and both methods (voting machines and paper ballots counted electronically). The scattered anomalies found are likely due to issues of process reliability, without cause to suspect malicious intent. Of course, all anomalous findings should be investigated to determine the cause and appropriate corrective action, because whether deliberate or inadvertent, the errors indicate that the election results might have been compromised. That won't be happening, though. The output of electronic voting equipment in Kansas is never verified post-election.

The results for the Presidential race look very suspicious. In Wichita and Winfield (four of the five sites), votes appear to have been shifted from Clinton to Trump. Results at the fifth site, Wellington, showed substantial errors in the opposite direction. Results for the four Supreme Court justices opposed by Governor Brownback show a similar pattern, nearly double in magnitude. This is not plausibly due to Republicans and Democrats having different propensities to respond to the exit poll; if that were the case, we would see the same pattern in all locations, methods, and races. We don't. This looks like malicious tampering with the results by at least two different parties with opposite intentions.
These findings could be easily proven wrong with an audit of the results in those locations, except that only Sedgwick County has a paper trail. A paper trail that is, apparently, forbidden ever to be seen by human eyes.

Our machines should not be considered trustworthy without a paper trail and post-election verification of the count. These steps are the minimal precautionary measures needed according to the testimony of Dr. Andrew Appel of Princeton University to the Presidential Advisory Commission on Election Integrity last month.

Sedgwick County purchased new machines and placed them in use in the special election in April. Immediately after the election and several times since then, via phone and email, I inquired of the Sedgwick County Elections office regarding what verification or auditing of the results of these new machines has been done or planned. I received the following response last week:
“State statutes have not changed regarding the ability of an election official to conduct post-election audits of voting equipment. Until such time as that occurs, we are unable to audit the voting equipment. Sedgwick County and this office strongly support legislation that permits post-election audits but this is a matter to be decided in the state legislature.” email on Oct 19, 2017 from Sandra L. Gritz, Chief Deputy Election Commissioner, Sedgwick County Election Office

Democracy requires transparency in the vote count. We don't have that. New machines that aren't verified are not an improvement. Citizens such as myself have no cause to have faith in the reported results. Further, faith in electronically computed election results requires verification done in a transparent and secure manner, because audits can be rigged as easily as voting machines.

If this sounds crazy, I would remind the committee of the 2015 diesel emissions cheating scandal, in which VW was caught installing secret software used to fool exhaust emissions tests in more than half a million vehicles sold in the US. Pre-election testing of the voting machines is not sufficient to guarantee accuracy.

Equifax is merely the latest in a seemingly endless procession of data breaches that includes multi-national corporations as well as federal and state agencies: the CIA, the NSA, the US Postal Regulatory Commission, the US Department of Housing and Urban Development, the Health Resources and Services Administration, the National Oceanic and Atmospheric Administration, and the U.S. Election Assistance Commission. That last agency, the Election Assistance Commission, is charged, among its many other responsibilities, with testing and certifying voting equipment.
As our elected representatives overseeing the voting process, I hope you will rectify this situation and allow all Kansas voters the right to see and count ballots for themselves or to see them counted by someone they find trustworthy. Transparency means having a paper trail and allowing voters access to that paper trail.

You may contact me for more information or a copy of my journal paper at Beth@bethclarkson.com

Provisional Voters Analysis

The Difference between Provisional Votes and Counted Votes in November 2016 Exit Poll

Before examining the exit poll results for provisional voters and counted voters, it is worth noting that for the five sites we collected data on, the percent of provisional voters with respect to the total number of counted votes has a near perfect correlation with the percent of registered party members for those sites (see table below). The Democrats had a correlation of 0.9677, while the correlation for the Republicans was slightly higher in magnitude and in the opposite direction. The party percentages are not independent of each other, so we expect similar correlations.

The Winfield results are not included in this analysis due to the low number (13) of provisional ballot surveys from that site. Urban Wichita is included because the concern regarding the paper ballot group being contaminated with provisional ballot voters will, even if true, only decrease the probability of finding statistically significant differences between the provisional ballot votes and the counted votes.

Contamination in the other direction is a concern for SE and SW Wichita, as they have higher rates of provisional ballot voters than other voters. Contamination in either direction dilutes the probability of seeing a statistically significant difference; it does not increase the probability of a Type I error, so conclusions of statistically significant differences hold even if some erroneous mixing of the groups occurred due to respondent error.

These results are also independent of any latent response bias in the survey sample due to party affiliation. If there was a party bias in responding to the exit poll, it can be presumed consistent regardless of whether a voter was found to be unregistered or without adequate ID and thus required to vote provisionally. Results are shown below.


Site Statistics for Provisional Ballots
Site | Total Votes | Prov Votes | Prov Vote % | % Reg Rep | % Reg Dem | % Reg Lib | Exit Poll Prov | % of Prov Votes | Prov % of Exit Poll
SW | 1796 | 78 | 4.34 | 43.33 | 17.17 | 1.06 | 79 | 101.3 | 5.51
SE | 1323 | 92 | 6.95 | 29.04 | 30.39 | 0.95 | 79 | 85.87 | 8.54
Urb | 1113 | 160 | 14.38 | 8.58 | 54.79 | 0.52 | 101 | 63.13 | 11.44
Win | 2494 | 89 | 3.57 | 45.54 | 24.95 | 0.70 | 49 | 55.06 | 3.20
Well | 2280 | 81 | 3.55 | 47.01 | 21.78 | 0.75 | 13 | 16.05 | 2.20
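As a check on the quoted figure, the Democratic correlation can be recomputed from the Prov Vote % and % Reg Dem columns of the table above (pure Python; the Pearson formula is standard):

```python
from math import sqrt

# Columns from the table above, in site order SW, SE, Urb, Win, Well.
prov_vote_pct = [4.34, 6.95, 14.38, 3.57, 3.55]
reg_dem_pct = [17.17, 30.39, 54.79, 24.95, 21.78]

def pearson(xs, ys):
    """Standard Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y)

r = pearson(prov_vote_pct, reg_dem_pct)
print(round(r, 3))  # ~0.968, in line with the 0.9677 quoted in the text
```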

We can use the binomial test for the provisional versus counted ballots. They are two separate samples and one is not a subset of the other, which rules out the hypergeometric test used with the machine and paper ballots analysis. The binomial test was done for each candidate and judge response with the results shown below in Table 11.
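For readers unfamiliar with it, an exact two-sided binomial test is easy to sketch in pure Python (a generic textbook implementation, not the code used for the paper): under the null hypothesis, the number of provisional respondents choosing a candidate follows a binomial distribution with the counted-vote share as p.

```python
from math import comb

def binom_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p)."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def binom_test_two_sided(k, n, p):
    """Exact two-sided binomial test: sum the probabilities of every
    outcome no more likely than the one observed."""
    observed = binom_pmf(k, n, p)
    return sum(
        binom_pmf(i, n, p)
        for i in range(n + 1)
        if binom_pmf(i, n, p) <= observed * (1 + 1e-9)
    )

# Illustration with made-up numbers: 9 of 10 provisional respondents
# backing a candidate whose counted-vote share was 50%.
print(round(binom_test_two_sided(9, 10, 0.5), 4))  # 0.0215
```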

The differences in vote share between the provisional and the counted voters in the exit poll fit Student's t-distribution. We will use it to determine whether the provisional voters in our exit poll were statistically significantly different from registered voters with proper ID.

There isn’t sufficient data to warrant reporting results for the Independent candidate or the Green party candidate. The candidate differences are not independent within a race (they will sum to zero) but the results for the three candidate races are independent of each other.

Each Judge is an independent contest relative to the other four judges and the three candidate races.

A paired t-test was performed on the vote share ratios for the Republican, Democrat and Libertarian candidates in all three candidate races and another for the yes, no, and blank responses for the five judges to determine if there was any bias with respect to any particular party or retention vote.
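The mechanics behind that paired t-test can be sketched with made-up numbers (twelve pairs, matching the four sites by three races; the differences below are purely illustrative, not the study's data):

```python
from math import sqrt

# Twelve made-up paired differences (provisional minus counted vote
# share, in percent), one per site/race pair -- illustrative only.
diffs = [4.1, 6.3, 2.8, 7.0, 5.5, 3.9, 6.8, 4.4, 5.9, 3.2, 6.1, 5.3]

n = len(diffs)
mean = sum(diffs) / n
sd = sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
se = sd / sqrt(n)
t_crit = 2.201  # two-sided 95% critical value for Student's t, 11 df
lcl, ucl = mean - t_crit * se, mean + t_crit * se
t_stat = mean / se

# This interval excludes zero, so a shift like this made-up one would
# be statistically significant at the 5% level.
print(round(mean, 2), round(lcl, 2), round(ucl, 2))
```

The Avg Diff, LCL, and UCL columns in the table below are exactly these quantities computed on the real exit-poll data.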


Paired t-test Results
Response | Avg Diff | LCL* | UCL* | p-value
Democrat | 0.20% | -3.19% | 3.58% | 0.9008
Libertarian | -2.45% | -4.31% | -0.59% | 0.0146
Republican | 5.11% | 1.81% | 8.41% | 0.0058
Other/Blank | -1.96% | -4.14% | 0.22% | 0.0729
Judges – Yes | 7.93% | 5.67% | 10.19% | <0.0001
Judges – No | 1.03% | -1.66% | 3.72% | 0.4313
Judges – Blank | -8.96% | -10.64% | -7.29% | <0.0001

Republicans show a statistically significant difference with provisional voters being between 1.81% and 8.41% (average 5.11%) less likely to vote for Republican candidates compared to voters whose ballots were counted that day.

Libertarians show a statistically significant difference using the t-test, with an increase of between 0.59% and 4.31% (average 2.45%) in vote share from the provisional voters.

Neither the Democrats nor the "Other/Blank" responders showed a statistically significant difference between the provisional voters and the regular voters across the four sites and three races with the t-test.

Provisional votes for the Kansas Supreme Court Justices show a distinct pattern of provisional voters being nearly 8% (average 7.93%) less likely to vote yes for all judges than the counted voters and nearly 9% (average 8.96%) more likely to indicate that they did not vote in that contest. The uncounted provisional ballots would not have altered the outcome of the races studied.

*LCL and UCL refer to the Lower and Upper Limits of the 95% Confidence Interval. If one is positive and the other negative, then we can presume there is no statistically significant difference between the counted voters and the provisional voters.
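That decision rule can be applied mechanically to the values transcribed from the paired t-test table above (a sketch of the footnote's rule, not the original analysis):

```python
# (LCL, UCL) pairs transcribed from the paired t-test results table.
intervals = {
    "Democrat": (-3.19, 3.58),
    "Libertarian": (-4.31, -0.59),
    "Republican": (1.81, 8.41),
    "Other/Blank": (-4.14, 0.22),
    "Judges - Yes": (5.67, 10.19),
    "Judges - No": (-1.66, 3.72),
    "Judges - Blank": (-10.64, -7.29),
}

def significant(lcl, ucl):
    """Significant at the 5% level iff the 95% CI excludes zero."""
    return not (lcl < 0 < ucl)

for name, (lo, hi) in intervals.items():
    flag = "significant" if significant(lo, hi) else "not significant"
    print(f"{name}: {flag}")
```

Running this flags the Republican, Libertarian, Judges-Yes, and Judges-Blank rows as significant, matching the discussion above.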

The Clarkson Curse – Never seen this happen before!

I recently received the following email from my lawyer's paralegal:

Randy asked me to send you the attached order from the Court of Appeals requiring supplemental briefing. He said to tell you this has never happened to him before.
Order for Supplemental Briefing

We Clarksons, or at least the branch I belong to, have a long-running family joke about a family curse. The key words are something like “This has never happened before” being uttered to us by professionals, usually involved with major repairs. This time, it’s extra work for no pay for my attorney, Randy Rathbun. I want to thank him for his continued efforts on this case. I’d have given it up before now if he weren’t there.

My reading of this order – keeping in mind that I am, in Randy’s opinion, a terrible lawyer – is that they are looking for an excuse to call it a moot question and boot it off their agenda as not worth their time. I’ve no idea how this will play out, but I trust Mr. Rathbun to do his best. Thanks for your continued interest and support.

Exit Poll Paper rejected by Political Science Journal

My paper on exit polls was rejected in its first peer review, primarily due to the bias among academic researchers against using exit polls to gauge election accuracy. Because they don't trust exit polls as a measure of vote count accuracy, they don't accept my conclusion. This is not surprising; I had journal editors who would reject my paper on that basis alone without even considering sending it to reviewers. The journal I finally sent it to was the first one that was willing to at least consider it.

Because this is a controversial conclusion, I needed to include a lot of backup information justifying it. My original paper was over 8,000 words, more than twice their limit. It was sent back immediately, without any reviews, based on that and another minor bookkeeping error I had made. I corrected the bookkeeping error and drastically revised the paper, cutting it down to a bit over 5,000 words. I am grateful they were willing to assign reviewers to it, but most of the valid criticisms were due to my having cut so much of the supporting documentation out of the revision they received. This is a conundrum I have not yet resolved. If I include all of the analyses they wanted to see, I will have a paper longer than peer-reviewed journal publishers are generally willing to accept.

Here is what the reviewers said and my responses to them. The reviewers' comments are in italics:

Reviewer: 1

Comments to the Author

“In the absence of any deliberate manipulation of results, the difference in vote share between the official count and an exit poll (e) will be randomly distributed around zero and relatively small” (p. 6).

That’s a crazy assumption. Significant discrepancy between an official tabulation and a poll estimating the same quantity from a sample of the relevant population can follow from: (a) deliberate manipulation of the tabulation; (b) inadvertent error in the tabulation; (c) inadvertent misrepresentation of behavior by poll respondents; (d) deliberate misrepresentation of behavior by poll respondents; (e) systematic differences between those willing and those unwilling to respond to the survey, in the (almost universal) case where the survey does not cover the whole population; (f) error by the analysts comparing the poll and official results. These authors implicitly assume that (b) through (f) are negligible, which is, frankly, ludicrous.

My response:

My first draft paper presented detailed responses to these alternate explanations – the “Liars, Idiots and Introverts” section covered most of them, with some additional paragraphs scattered in other sections. I toned the section down and then ended up cutting it completely in the version he/she saw. I’ll put it back in, at least the toned down version, before I send it off again.

A priori, (a) is an unlikely culprit most of the time if only because falsifying election results is usually a felony and those rigging the outcomes would be taking large risks. That point is certainly not an “impossibility theorem,” and there are surely some cases of deliberate fraud. But (a) is not a natural first suspect, and pollsters have long warned consumers of polling data not to exaggerate the accuracy of exit (or, indeed, other) polls. See, eg, https://fivethirtyeight.com/features/ten-reasons-why-you-should-ignore-exit/.

My response:

This seems to reflect a serious bias against using exit poll results as a method of verifying election accuracy. There are certainly limitations to such data, particularly when put to use in predicting outcomes or general trends. That was not the aim of this one. This particular exit poll was not designed as a standard opinion poll, but as a standard design for auditing process results and isolating a problem area to assess the size of the problem. The accuracy of my calculations is based on well-understood statistical methods appropriate for the data.

I find it interesting that there is strong antipathy to using exit poll results for verifying accuracy of election results by political science academics. I have seen no solid reason for the disdain, but it has been common in my queries to editors about whether they would consider the paper at all.

One consequence of adopting this stance regarding exit poll results is that it closes off the only legitimate avenue for voters to assess the accuracy of their precinct results. Voters cannot provide sufficient evidence for academics to take their concerns seriously and start doing something to put out the fire in the theater. Any vocalized suspicion of voting machines being rigged is dismissed as tinfoil hat territory as this reviewer just did above.

I am also troubled by the idea that falsifying election results can be dismissed because it’s illegal and heavily penalized if caught. That’s like declaring a death as being more likely due to natural causes just because murder is rare, illegal and heavily penalized when caught. It does illustrate the difficulty in getting this type of controversial hypothesis through the peer-review system.

I am instructed to limit my comments to 500 words so I’ll raise only a few more specific worries.

Responding to an uncited quotation by Thomas König (probably in an APSR rejection letter) the authors claim that an un-refereed e-book proves that discrepancy between exit polls and official results are fraud. Bollocks.

My response:

This is because I included Jonathan Simon's "Code Red" book as a reference. There are certainly reasons to be suspicious of a self-published book of this nature. On the other hand, I have the appropriate qualifications to perform a peer review on this book; I've read it and found the data convincing. This reviewer has scoffed at the publishing venue while demonstrating how difficult it is for such a hypothesis to make it through the peer review process.

What I was trying to express with that citation was the justification for considering the hypothesis at all and why I set up exit polls to evaluate it independently. That the hypothesis that our voting machines are being manipulated is not crazy or ludicrous, but a legitimate concern to voters. I don’t understand why academics in the field of political science are unwilling to give the hypothesis serious consideration, but they don’t appear to be willing to entertain the notion that our voting machines are being rigged with anything but ridicule.

– A second cursory argument against the polls being wrong comes up following the main analysis, when the authors claim that the discrepancies are not correlated with registration skew, but provide not clear details (this is few lines on “Corollary 1” at the top of page 10).

My response:

This section originally had a table and chart and more detail, but I removed it trying to cut it down to an acceptable length. I’ll consider putting this back in, but maybe in an appendix to keep the paper length down.

– Finally comes an ANOVA which, for the first time, acknowledges that the poll covered more races than the five justice-recall events. Insofar as systematic differences between responders and non-responders drive poll-tabulation discrepancies, differences across contests might be informative, so it is wise to use all of the information available and not analyze contests in isolation. How informative are the multiple contests depends in part on how strongly correlated are the voting patterns across contest. Generally, a serious analysis of survey-result discrepancies should make use of all of the information at hand. If the authors believe that only the KS SC contests were rigged (unnecessarily as it turns out!?), they can say so explicitly and make better use of the other races. The analysis done through page 9 in this manuscript ignores the other contests and is, accordingly, of limited value.

My response:

In the final submitted version, I took out the analyses of those races. They provide more support for the rigging hypothesis, with four additional anomalies, only one of which has a reasonable alternative explanation. Thinking like a mathematician, I reasoned that since the judge races were sufficient to demonstrate deliberate manipulation, the other races did not need to be explicitly covered. For a mathematician that is logical, but based on this comment, it was not a good choice.

– “The relatively low response rate for provisional ballots and relatively high rate for the scanned paper ballots at the Urban Wichita and Sumner County sites indicate that some provisional voters mistakenly marked that they had used a paper ballot.” Sounds reasonable, but this is not a minor detail, but, rather an important indication that option (c) above (inadvertent errors in describing behavior by poll respondents) is in evidence. Poll respondents who mistakenly misreport their method of voting can, equally, misreport how they voted.

My response:

Actually, no, I would disagree that they can equally misreport how they voted. Being mistaken about which type of paper ballot was used is an understandable and easy error for people to make. If the provisional voters have differences in their choices (a question that has not been answered), it will lead to bias in results if they are included.

Mistakenly answering Yes versus No can be expected to be a considerably less frequent error. It’s also reasonable to assume that inadvertent errors of that nature will be randomly distributed among all responses, so such errors – as I explicitly assumed and this reviewer rejected as “crazy” – will be randomly distributed around zero and relatively small. Thus, no bias is expected from such errors. This is an important distinction between the two types of inadvertent errors, since bias is what we are attempting to measure and then attribute to machine rigging. I’m not sure how to revise the paper to deal with this criticism. I will have to give it considerable thought.

– Figure 5 shows a statistically significant Democratic effect. The authors briefly assert that, “the potential sampling bias is not large enough to explain the SCJ results” (p. 11) without providing a better sense of magnitude.

My response:

This is a valid criticism I can use to improve the paper. To sense the magnitude of the difference, readers would have to compare the appropriate values from two different tables. I will add an explicit statement of the difference in magnitude and additional points or lines to the graphs to make this clear.

– What should be compared is distributions because all of the races feature multiple options, and an analysis of only a scalar (Republican vote share or Democratic vote share or “Yes” share) is incomplete. This is a simple point the author correctly broach, and that the ANOVA acknowledges (though with collapsing of abstention of “other” voting). However, this key point is ignored in their main work, as when they dwell on the yes votes for the KSC justice retention races on pages 7-9. Voting options in the SC contests were not 2 (Yes or No), but, rather, 3 (Yes, No, and Abstain/Leave Blank). These authors drop the item non-responses as though they are random and ignore the important blank/abstain shares in the main analysis, and neither move is wise.

My response:

This is a valid criticism of the analysis technique used for the judges; it's the sort of technical nit that academics like to argue over but that has no impact on the results. I actually ran both types of analyses and the results were very similar. I didn't want to include both and deliberated for a while on which one to include. I ended up going with the two responses, yes and no, rather than including the non-responses as a separate category, because it's a simpler, cleaner analysis for non-statisticians to understand. I'll reconsider which to include, but even if I switch to the other type, it only changes the numbers in some tables and bullet points. Using the other approach doesn't change the conclusions at all.

– What jumps out of Figure 3 is that the Sumner results appear to be different in kind from the other 6, insofar as all judges’ yes vote shares in the exit poll exceeded their official yes shares. The obvious candidate for an explanation is not that the cheating was of a distinct form in Sumner, but that the people implementing the exit poll forgot to include a “left it blank” option on the survey form. Apparently, more of the poll respondents who did abstain chose the wrong answer “Yes” than the wrong answer “No”, rather than leaving the poll question blank too. That pattern was un-hypothesized and is mere surmise on my part. The details of the survey matter a great deal and the blunder of omitting one of the options changed the conclusion greatly. It was a useful mistake! In that light, it is startling that the authors are so relaxed about alleging fraud based on the survey-versus-tabulation comparison, on the premise that the other surveys cannot possibly been biased or wrong in any way.

My response:

I have a definite defensive reaction to calling this difference a blunder or mistake. It was a deliberate decision made by the manager of that exit poll due to space considerations. He also included questions about other races unique to his location so space was at more of a premium for his survey form than others with fewer questions. I disagree that this difference would cause the difference seen in Sumner county, but cannot prove it. However, even if this data was dropped from the results analysis, I would still have better than 99% confidence the other sites show signs of voting machine rigging. Accepting this concern as legitimate and removing this data from the analysis would not alter the conclusions of the paper.

– Bishop, in POQ in the 90s, reported an experiment to show that it seemed to matter if exit polls were done as face-to-face interviews or as self-reported surveys. Arguably, that article showed that merely emphasizing secrecy by marking the form “Secret” helped improve response rate and accuracy (the latter is not fully clear and would not be the right conclusion in the world where tabulations are all suspect, inhabited by these authors). It would be helpful to know exactly how the volunteers who administered these exit polls obtained the answers.

My response:

Details about how the survey was conducted were included in an earlier version of this paper, but in my efforts to reduce it to an acceptable length, I may have cut that section too much. I can make this clearer by restoring my original, longer write-up for this section.

– “In order to drive down the size of the error that could be detected, we needed the sample size to be as large as possible relative to the number of votes cast.” Misleading. The effects of getting high coverage are conditional on no error in responses (deliberate in the case of socially desirable answers, inadvertent in the case of busy/distracted/annoyed respondents participating with reluctance or hurrying, etc.). And, of course, non-response is critically important, because contact is not measurement. One might use these data to compute how different the non-responders would have to be from the responders for the official data to be correct under an assumption of no-error-in-poll-responses. That would be a bit interesting, but the assumption would be hard to justify, and I’d still be disinclined to jump on the fraud bandwagon. My strong suspicion is that admitting to all of the possible sources of discrepancy will make it clear that it is not wildly implausible that those who wouldn’t talk to pollsters were a bit different from those who would, and that alone could generate the gaps that so excite the authors.

My response:

I actually did compute how different the non-responders would have to be, as this reviewer suggested. The results were not particularly illuminating to me, so I did not include them in the paper. However, this reviewer is not the only person to have suggested it, so I will include it in the next revision.
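The computation the reviewer suggests is straightforward arithmetic: given a response rate and the poll’s yes share, solve for the yes share the non-responders would need for the official tally to be exact. A minimal sketch in Python; the function name and all numbers are mine, purely for illustration, and not taken from the paper:

```python
def implied_nonresponder_share(p_official, p_poll, response_rate):
    """Yes share the non-responders would need for the official tally
    to be correct, assuming the poll responses themselves are error-free.

    Solves r * p_poll + (1 - r) * x = p_official for x.
    """
    r = response_rate
    return (p_official - r * p_poll) / (1.0 - r)

# Hypothetical numbers: official yes share 55%, exit-poll yes
# share 62%, 50% response rate.
x = implied_nonresponder_share(0.55, 0.62, 0.50)
print(f"non-responders would need a yes share of {x:.0%}")
```

The further the implied share falls from the responders’ share, the harder the innocent-non-response explanation is to sustain; an implied value outside [0, 1] would rule it out entirely.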

Reviewer: 2

Comments to the Author
The manuscript looks at a recent vote in Kansas and argues that the machine counted votes were manipulated.

The authors state that the best way to verify a vote count when one has no access to the voting records is an exit poll. I have two problems with this. First, exit polls have many problems and are usually regarded as less reliable than normal random sample surveys.

My response:

This seems to reflect a serious bias against using exit poll results as a method of verifying election accuracy. Exit polls as typically conducted in the US do have many shortcomings when used to detect fraud. That is why I did not use a standard method but instead designed my own approach based on my statistical and quality engineering background.

Second, many alternative approaches exist that have not been discussed and that build on work by Mebane, Deckert, Beber and Scacco, Leemann and Bochsler, and many others. These approaches often, but not exclusively, rely on Benford’s law and test empirical implications of specific forms of fraud.

My response:

This is useful to me as it provides direction for further research. My suspicion is that the alternative approaches being referenced would require resources that are not available to me, but I will look into these authors to see if I can learn something that will improve my paper and future research.
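For readers unfamiliar with the approaches the reviewer cites: Mebane’s election-forensics work typically tests the digits of precinct-level vote totals against Benford’s law (often the second digit rather than the first). A minimal first-digit version, with invented totals and a function name of my own, looks roughly like this:

```python
import math
from collections import Counter

def benford_chi2(counts):
    """Pearson chi-square of leading digits against Benford's law.

    `counts` is a list of positive vote totals (e.g. per precinct).
    Returns the chi-square statistic (8 degrees of freedom).
    """
    digits = [int(str(c)[0]) for c in counts if c > 0]
    observed = Counter(digits)
    n = len(digits)
    chi2 = 0.0
    for d in range(1, 10):
        expected = n * math.log10(1 + 1 / d)
        chi2 += (observed.get(d, 0) - expected) ** 2 / expected
    return chi2

# Made-up precinct totals, illustration only:
totals = [112, 187, 243, 391, 156, 129, 208, 314, 175, 142]
print(round(benford_chi2(totals), 2))
```

A chi-square large relative to its 8-degrees-of-freedom reference distribution suggests the totals deviate from the Benford pattern, though the literature the reviewer cites also debates how reliable such digit tests are as fraud detectors.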

The argument that the common tendency found in the US has to be attributed to fraud is baseless. The authors only provide one citation from a non-peer-reviewed source and do not provide other arguments to justify this claim.

My response:

This is referring to the book “Code Red,” also denigrated by the first reviewer. What I was trying to express with that citation was the justification for considering the hypothesis that my exit polls were set up to evaluate. I will definitely have to revise this section.

The discussion of stratified vs clustered sampling is unclear and the conclusion is confusing. The authors also mention that participation rates were excellent without providing the rates.

My response:

The discussion of stratified vs. cluster sampling and the participation rates were in a section that suffered from the cuts made to reduce the paper’s length.

The main problem is that the authors assume the exit poll to provide something like a true benchmark.

My response:

This seems to reflect a serious bias against using exit poll results as a method of verifying election accuracy. These exit polls provided the citizens of southeast Kansas with the best benchmark obtainable given the voting equipment and processes used in Kansas. If that benchmark is not acceptable to academics, the implication should be that the voting equipment and processes themselves are unacceptable, given that citizens have no other means of verifying the vote count.

But it is one of the fundamental lessons of social science polling that participation is correlated with socio-economic background variables. Education and survey participation are usually positively correlated and have to be corrected. Since these factors are also correlated with vote choice there is no reason to expect the exit polls in their raw form to be anywhere close to the truth.

My response:

This criticism is based on a lack of understanding of the cluster versus stratified sampling approach and of the details of how the comparison was made. Using cluster sampling and comparing the results by precinct and voting equipment allows us to dispense with the complicated process of adjusting the exit poll results for factors such as education level, income, or race, which is required when using stratified samples. Stratified sampling allows better predictions for the larger population and a more in-depth analysis of who is voting for whom, but it makes the direct comparison needed to assess the accuracy of the voting machines untenable.
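The direct comparison that cluster sampling enables can be illustrated with a simple per-precinct significance test: treat the poll as a random sample of that precinct’s voters and ask how far its yes share sits from the official machine-count share. This is my own sketch with invented numbers, not the test actually used in the paper:

```python
import math

def poll_vs_official_z(poll_yes, poll_n, official_share):
    """Normal-approximation z score for a precinct's exit-poll yes
    share against its official machine-count yes share.

    Under the null (machine count correct, poll a simple random
    sample of that precinct's voters), |z| should rarely exceed ~2.
    """
    p_hat = poll_yes / poll_n
    se = math.sqrt(official_share * (1 - official_share) / poll_n)
    return (p_hat - official_share) / se

# Hypothetical precinct: 130 yes out of 200 poll responses,
# against an official 58% yes share.
z = poll_vs_official_z(130, 200, 0.58)
print(f"z = {z:.2f}")  # roughly two standard errors above
```

A real analysis would also apply a finite-population correction, since an exit poll of this design samples a large fraction of the precinct’s voters; that shrinks the standard error and makes the test more sensitive.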

In fact, Republicans are less likely to participate. The supplied ANOVA test for some implications is unclear and I would want to just see the plain participation rates correlated across all districts controlling for average education. But the non-response bias would perfectly line up with all findings and this is then the entire ball game.

My response:

I definitely need to rewrite the ANOVA analysis section. The ANOVA test showed a bias in responses against Libertarians and in favor of Democrats, while Republicans showed no overall bias in either direction. Whether that bias is due to inherent characteristics of the party members or to rigging by the machines can be debated. The ANOVA also showed that the size of that bias was not enough to account for the discrepancies observed for the Supreme Court judges, and that was after assigning the Democratic bias to the four Republican justices that Brownback opposed.

I agree that it would be interesting to run the suggested correlation, but the average education of voters by precinct is not available for Kansas, and I do not have the resources to compile such a database, even for only a few counties.
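The ANOVA described above can be sketched without any libraries: a one-way F statistic on per-precinct discrepancies (poll share minus official share) grouped by the candidate’s party. Everything below, including the discrepancy values, is fabricated for illustration and does not come from the paper:

```python
def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA: between-group variance of
    discrepancies over within-group variance.

    `groups` maps a label (e.g. a candidate's party) to a list of
    per-precinct discrepancies (poll share minus official share).
    """
    all_vals = [v for g in groups.values() for v in g]
    n, k = len(all_vals), len(groups)
    grand = sum(all_vals) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2
                     for g in groups.values())
    ss_within = sum((v - sum(g) / len(g)) ** 2
                    for g in groups.values() for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Fabricated discrepancies for three parties, illustration only:
groups = {
    "Dem": [0.04, 0.05, 0.06],
    "Rep": [0.00, 0.01, -0.01],
    "Lib": [-0.03, -0.04, -0.02],
}
print(round(one_way_anova_F(groups), 1))
```

A large F says the between-party differences dwarf the precinct-to-precinct noise; estimating the size of each party’s bias, as the paper’s analysis needs, is a further step beyond the F test itself.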

I feel that it is extremely difficult to successfully argue along these lines. The authors fall short of being fully convincing on key points. Given this, and such a probable alternative explanation, I feel that this paper does not achieve what it sets out to do.

My response:

This is the bottom line for both reviewers really. I wasn’t convincing enough. I’ll have to try again.

I’m not sure what the best approach would be and would appreciate any suggestions my readers have. Should I continue to limit the length of the paper to make it acceptable to academic publications, or should I include all the information and analysis results that reviewers and readers could reasonably want to see, even though that inflates the length beyond what peer-reviewed publications will consider?

If I go with the longer paper, I am basically limited to self-publishing without peer review, which is then easily dismissed by anyone who disputes the conclusions, such as the Kansas Secretary of State’s Office.

Should I seek publication in another field or in a general scientific research publication? That route is less likely to get noticed by people with the ability to change our voting processes, but it would provide a peer-reviewed reference for people who are trying to make improvements: eliminating paperless voting machines and instituting auditing procedures when machines are used.

These are the questions I will ponder as I revise my paper again.