Testimony for the Oct 27th meeting of the Joint Committee on Ethics and Elections

Ranked Choice Voting – Excellent Idea! But only if combined with secure and transparent vote counting processes.

My name is Beth Clarkson. I am a lifelong Kansan, born in Wichita. I hold a Ph.D. in statistics and have been certified as a quality engineer by the American Society for Quality for the past 30 years. Over the past several years, I have become increasingly concerned about the accuracy of our voting machines, which has never been evaluated post-election via a hand count of election results. I have attempted more than once to get access to the records needed to perform an audit of our voting machines, but I have been told no every single time.
Having failed to receive permission to do an audit, on Nov 8th 2016, with the help of volunteers, I set up citizens’ exit polls at five locations in south central Kansas. This was our attempt to find out how accurate our voting machines are.

I’m afraid that the evidence from those exit polls points overwhelmingly to our voting machines being manipulated. Not by enough to alter any outcomes in the races studied – the maximum deviation between our exit polls and the official results was less than 5% in suspect races – but still extremely troubling to me as a voting citizen of Kansas. I have submitted these results and my conclusions for peer review. I will be happy to provide an electronic copy of this paper on request. Today, I will simply summarize the findings.

A common question I get regarding these findings is “Couldn’t your results be due to Republicans being less likely to fill out the exit poll survey?” The answer to that question lies in the patterns shown in the different races. While certainty is never forthcoming from statistical analysis, the hypothesis that ‘Party X members respond to surveys at a different rate than others’ is a plausible explanation only for the Libertarian Party in two races. It does not suffice for any others.

There are a number of statistically significant differences between our exit poll results and the official results, randomly scattered across the five locations and both methods (voting machines and paper ballots counted electronically). The scattered anomalies found are likely due to issues of process reliability, without cause to suspect malicious intent. Of course, all anomalous findings should be investigated to determine the cause and appropriate corrective action, because whether deliberate or inadvertent, the errors indicate that the election results might have been compromised. That won’t be happening, though. The output of electronic voting equipment in Kansas is never verified post-election.

The results for the Presidential race look very suspicious. In Wichita and Winfield – four of the five sites – votes appear to have been shifted from Clinton to Trump. Results at the fifth site, Wellington, showed substantial errors in the opposite direction. Results for the four Supreme Court justices opposed by Governor Brownback show a similar pattern, nearly double in magnitude. This is not plausibly due to Republicans and Democrats having different propensities to respond to the exit poll; if that were the case, we would see the same pattern in all locations, methods, and races. We don’t. This looks like malicious tampering with the results by at least two different parties with opposite intentions.
These findings could be easily proven wrong with an audit of the results in those locations, except that only Sedgwick County has a paper trail. A paper trail that is, apparently, forbidden ever to be seen by human eyes.

Our machines should not be considered trustworthy without having a paper trail and verifying the count afterwards. These steps are the minimal precautionary measures needed according to the testimony of Dr. Andrew Appel of Princeton University to the Presidential Advisory Commission on Election Integrity last month.

Sedgwick County purchased new machines and placed them in use in the special election in April. Immediately after the election and several times since then, via phone and email, I inquired of the Sedgwick County Elections office regarding what verification or auditing of the results of these new machines has been done or planned. I received the following response last week:
“State statutes have not changed regarding the ability of an election official to conduct post-election audits of voting equipment. Until such time as that occurs, we are unable to audit the voting equipment. Sedgwick County and this office strongly support legislation that permits post-election audits but this is a matter to be decided in the state legislature.” email on Oct 19, 2017 from Sandra L. Gritz, Chief Deputy Election Commissioner, Sedgwick County Election Office

Democracy requires transparency in the vote count. We don’t have that. New machines that aren’t verified are not an improvement. Citizens, such as myself, have no cause to have faith in the reported results. Further, faith in electronically computed election results requires verification done in a transparent and secure manner, because audits can be rigged as easily as voting machines.

If this sounds crazy, I would remind the committee of the 2015 diesel emissions cheating scandal, in which VW was caught installing secret software in more than half a million vehicles sold in the US to fool exhaust emissions tests. Pre-election testing of voting machines is not sufficient to guarantee accuracy.

Equifax is merely the latest in the seemingly endless procession of data breaches, which includes multi-national corporations as well as federal and state agencies, among them the CIA, the NSA, the US Postal Regulatory Commission, the US Department of Housing and Urban Development, the Health Resources and Services Administration, the National Oceanic and Atmospheric Administration, and the U.S. Election Assistance Commission. That last, the Election Assistance Commission, is charged, among its many other responsibilities, with testing and certifying voting equipment.
As our elected representatives overseeing the voting process, I hope you will rectify this situation and allow all Kansas voters the right to see and count ballots for themselves or to see them counted by someone they find trustworthy. Transparency means having a paper trail and allowing voters access to that paper trail.

You may contact me for more information or a copy of my journal paper at Beth@bethclarkson.com

Provisional Voters Analysis

The Difference between Provisional Votes and Counted Votes in November 2016 Exit Poll

Before examining the exit poll results for provisional voters and counted voters, it is worth noting that, for the five sites we collected data on, the percentage of provisional voters relative to the total number of counted votes has a near-perfect correlation with the percentage of registered party members at those sites (see the table below). The Democrats had a correlation of 0.9677, while the correlation was slightly stronger for the Republicans, in the opposite direction. The party percentages are not independent of each other, so we expect similar correlations.

The Wellington results are not included in this analysis due to the low number (13) of provisional ballot surveys from that site. Urban Wichita is included because the concern regarding paper ballots being contaminated with provisional ballot voters will, even if true, decrease the probability of finding statistically significant differences between the provisional ballot votes and the counted votes.

Contamination in the other direction is a concern for SE and SW Wichita, as they have higher response rates among provisional ballot voters than among other voters. Contamination in either direction will dilute the probability of seeing a statistically significant difference; it will not increase the probability of a Type I error, so conclusions of statistically significant differences will hold even if some erroneous mixing of the groups occurred due to respondent error.

These results are also independent of any latent response bias in the survey sample due to party affiliation. If there was a party bias in responding to the exit poll, it can be presumed consistent regardless of whether a voter was found unregistered or without adequate ID, thus necessitating a provisional ballot. Results are shown below.


Site Statistics for Provisional Ballots

Site   Total Votes   Prov Votes   Prov Vote %   % Reg Rep   % Reg Dem   % Reg Lib   Exit Poll Prov   % of Prov Polled   Exit Poll Prov %
SW     1796          78           4.34          43.33       17.17       1.06        79               101.3              5.51
SE     1323          92           6.95          29.04       30.39       0.95        79               85.87              8.54
Urb    1113          160          14.38         8.58        54.79       0.52        101              63.13              11.44
Win    2494          89           3.57          45.54       24.95       0.70        49               55.06              3.20
Well   2280          81           3.55          47.01       21.78       0.75        13               16.05              2.20
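As a quick check, the correlations claimed above can be reproduced directly from this table; a minimal sketch in Python (numpy is the only assumption):

```python
# Reproduce the provisional-rate vs. party-registration correlations
# from the "Site Statistics for Provisional Ballots" table above.
import numpy as np

prov_vote_pct = np.array([4.34, 6.95, 14.38, 3.57, 3.55])     # Prov Vote %
reg_dem_pct   = np.array([17.17, 30.39, 54.79, 24.95, 21.78])  # % Reg Dem
reg_rep_pct   = np.array([43.33, 29.04, 8.58, 45.54, 47.01])   # % Reg Rep

r_dem = np.corrcoef(prov_vote_pct, reg_dem_pct)[0, 1]
r_rep = np.corrcoef(prov_vote_pct, reg_rep_pct)[0, 1]
print(f"Democratic registration vs. provisional rate: r = {r_dem:.4f}")  # ~ 0.968
print(f"Republican registration vs. provisional rate: r = {r_rep:.4f}")  # ~ -0.99, opposite direction
```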

We can use the binomial test for the provisional versus counted ballots. They are two separate samples, and one is not a subset of the other, which rules out the hypergeometric test used in the machine and paper ballots analysis. The binomial test was done for each candidate and judge response, with the results shown below in Table 11 (a minimal sketch of the test follows).
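Here is a hedged sketch of such a binomial test; the counts are hypothetical placeholders, not the study's actual data (scipy is assumed):

```python
# Minimal sketch of a binomial test comparing provisional-ballot respondents
# against the counted-ballot respondents. All numbers are hypothetical.
from scipy.stats import binomtest

k, n = 30, 79   # hypothetical: candidate votes among n provisional-ballot responses
p = 0.44        # hypothetical: candidate's share among counted-ballot responses

result = binomtest(k, n, p, alternative="two-sided")
print(f"observed share = {k / n:.4f}, p-value = {result.pvalue:.4f}")
```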

The differences in vote share between the provisional and the counted voters in the exit poll will fit Student’s t-distribution. We will determine whether the provisional voters in our exit poll were statistically significantly different from registered voters with proper ID.

There isn’t sufficient data to warrant reporting results for the Independent candidate or the Green party candidate. The candidate differences are not independent within a race (they will sum to zero) but the results for the three candidate races are independent of each other.

Each Judge is an independent contest relative to the other four judges and the three candidate races.

A paired t-test was performed on the vote share ratios for the Republican, Democratic, and Libertarian candidates in all three candidate races, and another on the yes, no, and blank responses for the five judges, to determine if there was any bias with respect to any particular party or retention vote.
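For concreteness, here is a hedged sketch of a paired t-test with a 95% confidence interval for the mean difference; the vote-share pairs below are hypothetical placeholders, not the study's data:

```python
# Paired t-test sketch: counted vs. provisional vote shares, paired by
# site/race. All share values are hypothetical placeholders.
import numpy as np
from scipy import stats

counted_share     = np.array([0.44, 0.41, 0.52, 0.47, 0.39, 0.45])  # hypothetical
provisional_share = np.array([0.40, 0.37, 0.50, 0.41, 0.36, 0.42])  # hypothetical

diff = counted_share - provisional_share
t_stat, p_value = stats.ttest_rel(counted_share, provisional_share)

# 95% confidence interval for the mean difference
n = len(diff)
t_crit = stats.t.ppf(0.975, df=n - 1)
half_width = t_crit * diff.std(ddof=1) / np.sqrt(n)
lcl, ucl = diff.mean() - half_width, diff.mean() + half_width

print(f"avg diff = {diff.mean():+.2%}, 95% CI = ({lcl:+.2%}, {ucl:+.2%}), p = {p_value:.4f}")
```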


Paired t-test Results

Response         Avg Diff   LCL*      UCL*     p-value
Democrat         0.20%      -3.19%    3.58%    0.9008
Libertarian      -2.45%     -4.31%    -0.59%   0.0146
Republican       5.11%      1.81%     8.41%    0.0058
Other/Blank      -1.96%     -4.14%    0.22%    0.0729
Judges – Yes     7.93%      5.67%     10.19%   <0.0001
Judges – No      1.03%      -1.66%    3.72%    0.4313
Judges – Blank   -8.96%     -10.64%   -7.29%   <0.0001

Republicans show a statistically significant difference with provisional voters being between 1.81% and 8.41% (average 5.11%) less likely to vote for Republican candidates compared to voters whose ballots were counted that day.

Libertarians show a statistically significant difference using the t-test, with an increase of between 0.59% and 4.31% (average 2.45%) in vote share among the provisional voters.

Neither the Democrats nor the ‘Write-in/Left Blank’ responders showed a statistically significant difference between the provisional voters and the regular voters across the four sites and three races with the t-test.

Provisional votes for the Kansas Supreme Court Justices show a distinct pattern of provisional voters being nearly 8% (average 7.93%) less likely to vote yes for all judges than the counted voters and nearly 9% (average 8.96%) more likely to indicate that they did not vote in that contest. The uncounted provisional ballots would not have altered the outcome of the races studied.

*LCL and UCL refer to the Lower and Upper Limits of the 95% Confidence Interval. If one is positive and the other negative, then we can presume there is no statistically significant difference between the counted voters and the provisional voters.

Exit Poll Paper rejected by Political Science Journal

My paper on exit polls was rejected in the first round of peer review, primarily due to academic researchers’ bias against using exit polls to assess election accuracy. Because they don’t trust exit polls as a measure of vote count accuracy, they don’t accept my conclusion. This is not surprising – some journal editors rejected my paper on that basis alone, without even considering sending it to reviewers. The journal I finally sent it to was the first one willing to at least consider it.

Because this is a controversial conclusion, I needed to include a lot of backup information justifying it. My original paper was over 8,000 words, more than twice their limit. It was sent back immediately, without any reviews, based on that and another minor bookkeeping error I had made. I corrected the bookkeeping error and drastically revised the paper, cutting it down to a bit over 5,000 words. I am grateful they were willing to assign reviewers to it, but most of the valid criticisms were due to my having cut so much of the supporting documentation out of the revision they received. This is a conundrum I have not yet resolved: if I include all of the analyses they wanted to see, I will have a paper longer than peer-reviewed journals are generally willing to accept.

Here is what the reviewers said and my responses to them. The reviewers’ comments are in italics:

Reviewer: 1

Comments to the Author

“In the absence of any deliberate manipulation of results, the difference in vote share between the official count and an exit poll (e) will be randomly distributed around zero and relatively small” (p. 6).

That’s a crazy assumption. Significant discrepancy between an official tabulation and a poll estimating the same quantity from a sample of the relevant population can follow from: (a) deliberate manipulation of the tabulation; (b) inadvertent error in the tabulation; (c) inadvertent misrepresentation of behavior by poll respondents; (d) deliberate misrepresentation of behavior by poll respondents; (e) systematic differences between those willing and those unwilling to respond to the survey, in the (almost universal) case where the survey does not cover the whole population; (f) error by the analysts comparing the poll and official results. These authors implicitly assume that (b) through (f) are negligible, which is, frankly, ludicrous.

My response:

My first draft paper presented detailed responses to these alternate explanations – the “Liars, Idiots and Introverts” section covered most of them, with some additional paragraphs scattered in other sections. I toned the section down and then ended up cutting it completely in the version he/she saw. I’ll put it back in, at least the toned down version, before I send it off again.

A priori, (a) is an unlikely culprit most of the time if only because falsifying election results is usually a felony and those rigging the outcomes would be taking large risks. That point is certainly not an “impossibility theorem,” and there are surely some cases of deliberate fraud. But (a) is not a natural first suspect, and pollsters have long warned consumers of polling data not to exaggerate the accuracy of exit (or, indeed, other) polls. See, eg, https://fivethirtyeight.com/features/ten-reasons-why-you-should-ignore-exit/.

My response:

This seems to be a serious bias against using exit poll results as a method of verifying election accuracy. There are certainly limitations to such data, particularly when put to use in predicting outcomes or general trends. That was not the aim of this one. This particular exit poll was not designed as a standard opinion poll, but as a standard design for auditing process results and isolating a problem area to assess the size of the problem. The accuracy of my calculations is based on well-understood statistical methods appropriate for the data.

I find it interesting that there is strong antipathy among political science academics to using exit poll results to verify the accuracy of election results. I have seen no solid reason for the disdain, but it has been common in my queries to editors about whether they would consider the paper at all.

One consequence of adopting this stance regarding exit poll results is that it closes off the only legitimate avenue for voters to assess the accuracy of their precinct results. Voters cannot provide sufficient evidence for academics to take their concerns seriously and start doing something to put out the fire in the theater. Any vocalized suspicion of voting machines being rigged is dismissed as tinfoil hat territory as this reviewer just did above.

I am also troubled by the idea that falsifying election results can be dismissed because it’s illegal and heavily penalized if caught. That’s like declaring a death as being more likely due to natural causes just because murder is rare, illegal and heavily penalized when caught. It does illustrate the difficulty in getting this type of controversial hypothesis through the peer-review system.

I am instructed to limit my comments to 500 words so I’ll raise only a few more specific worries.

Responding to an uncited quotation by Thomas König (probably in an APSR rejection letter) the authors claim that an un-refereed e-book proves that discrepancy between exit polls and official results are fraud. Bollocks.

My response:

This is because I included Jonathan Simon’s “Code Red” book as a reference. There are certainly reasons to be suspicious of a self-published book of this nature. On the other hand, I have the appropriate qualifications to perform a peer review of this book. I’ve read it and found the data convincing. This reviewer has scoffed at the publishing venue while demonstrating how difficult it is for such a hypothesis to make it through the peer review process.

What I was trying to express with that citation was the justification for considering the hypothesis at all and why I set up exit polls to evaluate it independently: the hypothesis that our voting machines are being manipulated is not crazy or ludicrous, but a legitimate concern for voters. I don’t understand why academics in the field of political science are unwilling to give the hypothesis serious consideration, but they don’t appear willing to entertain the notion that our voting machines are being rigged with anything but ridicule.

– A second cursory argument against the polls being wrong comes up following the main analysis, when the authors claim that the discrepancies are not correlated with registration skew, but provide not clear details (this is few lines on “Corollary 1” at the top of page 10).

My response:

This section originally had a table and chart and more detail, but I removed it trying to cut it down to an acceptable length. I’ll consider putting this back in, but maybe in an appendix to keep the paper length down.

– Finally comes an ANOVA which, for the first time, acknowledges that the poll covered more races than the five justice-recall events. Insofar as systematic differences between responders and non-responders drive poll-tabulation discrepancies, differences across contests might be informative, so it is wise to use all of the information available and not analyze contests in isolation. How informative are the multiple contests depends in part on how strongly correlated are the voting patterns across contest. Generally, a serious analysis of survey-result discrepancies should make use of all of the information at hand. If the authors believe that only the KS SC contests were rigged (unnecessarily as it turns out!?), they can say so explicitly and make better use of the other races. The analysis done through page 9 in this manuscript ignores the other contests and is, accordingly, of limited value.

My response:

In the final submitted version, I took out the analyses of those races. They provide more support for the rigged hypothesis, with four additional anomalies, only one of which has a reasonable alternative explanation. Thinking like a mathematician, since the judge races were sufficient to prove deliberate manipulation, the other races did not need to be explicitly covered. For a mathematician this is logical, but based on this comment, that decision was not a good choice.

– “The relatively low response rate for provisional ballots and relatively high rate for the scanned paper ballots at the Urban Wichita and Sumner County sites indicate that some provisional voters mistakenly marked that they had used a paper ballot.” Sounds reasonable, but this is not a minor detail, but, rather an important indication that option (c) above (inadvertent errors in describing behavior by poll respondents) is in evidence. Poll respondents who mistakenly misreport their method of voting can, equally, misreport how they voted.

My response:

Actually, no, I would disagree that they can equally misreport how they voted. Being mistaken about which type of paper ballot was used is an understandable and easy error for people to make. If the provisional voters differ in their choices (a question that has not been answered), including them will bias the results.

Mistakenly answering Yes versus No can be expected to be a considerably less frequent error. It’s also reasonable to assume that inadvertent errors of that nature will be randomly distributed among all responses, so such errors – as I explicitly assumed and this reviewer rejected as “crazy” – will be randomly distributed around zero and relatively small. Thus, no bias is expected from such errors. This is an important distinction between the two types of inadvertent errors, since bias is what we are attempting to measure and then attribute to machine rigging. I’m not sure how to revise the paper to deal with this criticism. I will have to give it considerable thought.

– Figure 5 shows a statistically significant Democratic effect. The authors briefly assert that, “the potential sampling bias is not large enough to explain the SCJ results” (p. 11) without providing a better sense of magnitude.

My response:

This is a valid criticism I can use to improve the paper. To get a sense of the magnitude of the difference, readers currently have to compare values from two different tables. I will add an explicit statement of the difference in magnitude, and additional points or lines to the graphs, to make this clear.

– What should be compared is distributions because all of the races feature multiple options, and an analysis of only a scalar (Republican vote share or Democratic vote share or “Yes” share) is incomplete. This is a simple point the author correctly broach, and that the ANOVA acknowledges (though with collapsing of abstention of “other” voting). However, this key point is ignored in their main work, as when they dwell on the yes votes for the KSC justice retention races on pages 7-9. Voting options in the SC contests were not 2 (Yes or No), but, rather, 3 (Yes, No, and Abstain/Leave Blank). These authors drop the item non-responses as though they are random and ignore the important blank/abstain shares in the main analysis, and neither move is wise.

My response:

This is a valid criticism of the analysis technique used for the judges; it’s the sort of technical nit that academics like to argue over but that has no impact on the results. I actually ran both types of analyses and the results were very similar. I didn’t want to include both and deliberated for a while over which one to include. I ended up going with the two responses, yes and no, rather than including the non-responses as a separate category, because it’s a simpler, cleaner analysis for non-statisticians to understand. I’ll reconsider which to include, but even if I switch to the other type, it only changes the numbers in some tables and bullet points. Using the other approach doesn’t change the conclusions at all.

– What jumps out of Figure 3 is that the Sumner results appear to be different in kind from the other 6, insofar as all judges’ yes vote shares in the exit poll exceeded their official yes shares. The obvious candidate for an explanation is not that the cheating was of a distinct form in Sumner, but that the people implementing the exit poll forgot to include a “left it blank” option on the survey form. Apparently, more of the poll respondents who did abstain chose the wrong answer “Yes” than the wrong answer “No”, rather than leaving the poll question blank too. That pattern was un-hypothesized and is mere surmise on my part. The details of the survey matter a great deal and the blunder of omitting one of the options changed the conclusion greatly. It was a useful mistake! In that light, it is startling that the authors are so relaxed about alleging fraud based on the survey-versus-tabulation comparison, on the premise that the other surveys cannot possibly been biased or wrong in any way.

My response:

I have a definite defensive reaction to calling this difference a blunder or mistake. It was a deliberate decision made by the manager of that exit poll due to space considerations. He also included questions about other races unique to his location, so space was at more of a premium on his survey form than on others with fewer questions. I disagree that this difference would cause the difference seen in Sumner County, but cannot prove it. However, even if this data were dropped from the analysis, I would still have better than 99% confidence that the other sites show signs of voting machine rigging. Accepting this concern as legitimate and removing this data from the analysis would not alter the conclusions of the paper.

– Bishop, in POQ in the 90s, reported an experiment to show that it seemed to matter if exit polls were done as face-to-face interviews or as self-reported surveys. Arguably, that article showed that merely emphasizing secrecy by marking the form “Secret” helped improve response rate and accuracy (the latter is not fully clear and would not be the right conclusion in the world where tabulations are all suspect, inhabited by these authors). It would be helpful to know exactly how the volunteers who administered these exit polls obtained the answers.

My response:

Details about how the survey was conducted were included in an earlier version of this paper, but in my efforts to reduce it to an acceptable size, I may have cut that section too much. I can make this clearer by restoring my original, longer write-up for this section.

– “In order to drive down the size of the error that could be detected, we needed the sample size to be as large as possible relative to the number of votes cast.” Misleading. The effect of getting high coverage are conditional on no error in responses (deliberate in the case of socially desirable answers, inadvertent in the case of busy/distracted/annoyed respondents participating with reluctance or hurrying, etc.). And, of course, non-response is critically important, because contact is not measurement. One might use these data to compute how different the non-responders would have to be from the responders for the official data to be correct under and assumption of no-error-in-poll-responses. That would be a bit interesting, but the assumption would be hard to justify, and I’d still be disinclined to jump on the fraud bandwagon. My strong suspicion is that admitting to all of the possible sources of discrepancy will make it clear that it is not wildly implausible that those who wouldn’t talk to pollsters were a bit different from those who would and that alone could generate the gaps that so excite the authors.

My response:

I actually did compute how different the non-responders would have to be, as this reviewer suggested. The results were not particularly illuminating to me, and I did not include them in the paper. However, this reviewer is not the only person to have suggested it, so I will include it in the next revision.
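For readers who want the arithmetic, here is a hedged sketch of that kind of computation. Under the no-error-in-poll-responses assumption, if a fraction r of voters responded to the poll with candidate share p_resp, then the official share p_official pins down the share the non-responders would need. The inputs below reuse the SE Wichita presidential figures quoted later in this document; this is an illustration, not the calculation from the paper:

```python
# How different would non-responders have to be from responders for the
# official count to be correct, assuming no error in poll responses?
# Solves p_official = r * p_resp + (1 - r) * p_nonresp for p_nonresp.

def required_nonresponder_share(p_official, p_resp, r):
    """Candidate share among non-responders needed for the tallies to agree."""
    return (p_official - r * p_resp) / (1.0 - r)

p_official = 435 / 983   # official machine-count share (44.25%, SE Wichita example)
p_resp = 306 / 645       # share among exit poll responders (47.44%)
r = 645 / 983            # response rate among machine voters (~65.6%)

p_nonresp = required_nonresponder_share(p_official, p_resp, r)
print(f"non-responders would need {p_nonresp:.2%} vs. {p_resp:.2%} among responders")
```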

Reviewer: 2

Comments to the Author
The manuscript looks at a recent vote in Kansas and argues that the machine counted votes were manipulated.

The authors state that the best way to verify a vote count when one has no access to the voting records is an exit poll. I have two problems with this. First, exit polls have many problems and are usually regarded as less reliable than normal random sample surveys.

My response:

This seems to be a serious bias against using exit poll results as a method of verifying election accuracy. Exit polls as standardly done in the US do have many shortcomings with regard to detecting fraud. That is why I did not use a standard method but designed my own approach based on my statistical and quality engineering background.

Second, many alternative approaches exist that have not been discussed and build on work by Mebane, Deckert, Beber and Scacco, Leemann and Bochsler, and many others. These approaches often but not exclusively rely on Benford’s law and test empirical implications of specific forms of fraud.

My response:

This is useful to me as it provides direction for further research. My suspicion is that the alternative approaches being referenced would require resources that are not available to me, but I will look into these authors to see if I can learn something that will improve my paper and future research.

The argument that the common tendency found in the US has to be attributed to fraud is baseless. The authors only provide one citation from a non-peer-reviewed source and do not provide other arguments to justify this claim.

My response:

This is referring to the book “Code Red” also denigrated by the first reviewer. What I was trying to express with that citation was the justification for considering the hypothesis my exit polls were set up to evaluate. I will definitely have to revise this section.

The discussion of stratified vs clustered sampling is unclear and the conclusion is confusing. The authors also mention that participation rates were excellent without providing the rates.

My response:

The discussion of stratified vs cluster sampling and the participation rates was a section that suffered from cuts made to reduce the length.

The main problem is that the authors assume the exit poll to provide something like a true benchmark.

My response:

This seems to be a serious bias against using exit poll results as a method of verifying election accuracy. These exit polls provided the citizens of southeast Kansas with the best benchmark obtainable given the voting equipment and processes used in Kansas. That this benchmark is not acceptable to academics should imply that the voting equipment and processes used are unacceptable, given that no other means of verifying the vote count is available to citizens.

But it is one of the fundamental lessons of social science polling that participation is correlated with socio-economic background variables. Education and survey participation are usually positively correlated and have to be corrected. Since these factors are also correlated with vote choice there is no reason to expect the exit polls in their raw form to be anywhere close to the truth.

My response:

This criticism is based on a lack of understanding of the cluster versus stratified sampling approach and the details of how the comparison was made. Using the cluster sampling approach and comparing the results by precinct and voting equipment allows us to dispense with the complicated process of adjusting the exit poll results for factors such as education level, income, or race, which is required when using stratified samples. The stratified technique allows for better predictions for the larger population and a more in-depth analysis of who’s voting for whom, but it makes the direct comparison needed to assess the accuracy of the voting machines untenable.

In fact, Republicans are less likely to participate. The supplied ANOVA test for some implications is unclear and I would want to just see the plain participation rates correlated across all districts controlling for average education. But the non-response bias would perfectly line up with all findings and this is then the entire ball game.

My response:

I definitely need to rewrite the ANOVA analysis section. The ANOVA test showed a bias in responses against Libertarians and for Democrats, but Republicans overall showed no bias in either direction. Whether that bias is due to inherent characteristics of the party members or to rigging by the machines can be debated. The ANOVA showed that the size of that bias wasn’t enough to account for the discrepancies observed for the Supreme Court justices, and that was after assigning the Democratic bias to the four Republican justices that Brownback opposed.

I can agree that it would be interesting to run the suggested correlation, but the average education of voters by precinct is not available for Kansas precincts, and I do not have the resources to compile such a database, even for only a few counties in Kansas.
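As an illustration only (the paper's exact ANOVA design is not reproduced here), a one-way ANOVA on exit-poll errors grouped by party might look like the sketch below; the error values are hypothetical placeholders:

```python
# One-way ANOVA sketch: do exit-poll errors differ by party?
# The per-site/race error values below are hypothetical placeholders.
from scipy.stats import f_oneway

rep_errors = [5.1, 4.0, 6.2, 4.8]     # hypothetical errors (%) for Republican candidates
dem_errors = [0.2, -0.5, 0.9, 0.1]    # hypothetical errors (%) for Democratic candidates
lib_errors = [-2.4, -3.0, -1.8, -2.6] # hypothetical errors (%) for Libertarian candidates

f_stat, p_value = f_oneway(rep_errors, dem_errors, lib_errors)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```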

I feel that it is extremely difficult to successfully argue along these lines. The authors fall short of being fully convincing on key points. Given this and so probable alternative explanation, I feel that this paper does not achieve of what it sets out to do.

My response:

This is the bottom line for both reviewers really. I wasn’t convincing enough. I’ll have to try again.

I’m not sure what the best approach would be and would appreciate any suggestions my readers have. Should I continue trying to limit the length of the paper to make it acceptable to academic publications, or should I include all the information and analysis results that reviewers and readers could reasonably want to see, even though that inflates the length beyond what peer-reviewed publications will consider?

If I go with the longer paper, I am basically limited to self-publishing without peer review, which is easily dismissed by anyone who disputes the conclusions, such as the Kansas Secretary of State’s Office.

Should I seek publication in another field or in a general scientific research publication? It’s less likely to get noticed by people with the ability to change our voting processes if I do that, but it would be available as a peer-reviewed reference for people who are trying to make improvements, eliminate paperless voting machines, and institute auditing procedures when machines are used.

These are the questions I will ponder as I revise my paper again.

Summary of the 2016 Citizens Exit Polls in Kansas


The exit poll results from all five polling locations in Southeast Kansas show strong evidence of election fraud in both the patterns and the size of the errors.

I had major concerns with the accuracy of our voting machines based on my previous analyses, which is why these exit polls were run. The results confirm those suspicions.

[Figure: Exit Poll Errors for Kansas Supreme Court Judges and Presidential Race]

I designed this exit poll to check whether or not our voting machines are giving us accurate counts. I have looked into our local election statistics in the past and found concerning indications of fraud in the data. There is no official public reconciliation of the paper records with the machine-provided vote counts, nor are citizens allowed access to do it themselves. I have the credentials to do this: I have a Ph.D. in statistics and have been certified by the ASQ as a quality engineer since 1987. I was able to recruit enough concerned voters to man the exit polls from open to close on election day.

Voters were asked how they voted – by machine, a scanned paper ballot, or an uncounted provisional ballot. Results from the polling location give us the breakdown between machine votes and scanned ballots, which can be directly compared. The electronic voting machines used in all three Kansas counties were ES&S iVotronics. The paper ballot scanning equipment varied, but was all from the same manufacturer: ES&S.

The results from these exit polls tell a consistent, albeit unpleasant, story: our electronic voting machines should not be trusted. Scanned paper ballots appear to have been impacted as well, but due to some technical issues with the data, results for that type of counting machinery are less compelling. The scanned paper ballot results often continued the pattern of the voting machine results, which adds to the weight of evidence against the accuracy of the official results.

I have posted the data from our exit poll and the corresponding official vote counts at Exit Poll Data

These exit poll results clearly point to manipulation of the machine counts of our votes. These are not random errors. There is no other reasonable explanation for large and consistent errors in favor (or against) particular candidates in this situation.

[Figures: Exit Poll Errors for the Presidential Race, Senate Race, 4th Congressional Race, and Kansas Supreme Court Judges by Judge]

  • Presidential race results show votes shifted from Clinton to Trump in four of the five locations – all except Sumner County.
  • Votes in the Senate and 4th district Rep races were skewed toward the Libertarians at all five exit poll locations.
  • The data from the Supreme Court Judges show Yes votes stolen in four of the five locations – all except Sumner County, where they received extra Yes votes.

The analysis details are posted at Analysis of 2016 Citizens Exit Poll in Southeast Kansas

There is one ray of sunshine in these results – while the size of the shifts is cause for grave concern about the accuracy of the vote count, they were not sufficient to have altered the outcome of any of the races mentioned above. Kansas was Trump territory. The judges all retained their positions. No Libertarians won.

This ‘ray of sunshine’ is limited to these results. Races polled at only one or two polling locations look even worse. There was a more than 10% shift in votes from Norton to O’Donnell in the Sedgwick County Commissioner third district race, easily sufficient to alter the winner*. The data from these local races may cover only a portion of the voters at the polling site, so the data from those races is not as solid. The lower quantity and quality of data in those races reduces confidence in any conclusions about the results.

Who’s doing this, and how? I don’t know. My analyses show which candidates lost votes and which benefited, but that is not justification for assuming the beneficiaries knew about the vote theft. There is only one conclusion about the perpetrators I can come to.

Multiple Agents – The profile of errors from Sumner County is so different from the other sites that I can conclude more than one agent successfully altered voting machine counts in S.E. Kansas polling stations.


Analysis of 2016 Citizens Exit Poll in Southeast Kansas

This post is a detailed analysis of the races that were common to all five exit polls. The data is linked here: exit poll data

In the absence of election fraud, the difference in vote share between the official count and an exit poll (called the error) will be randomly distributed (both positive and negative) and relatively small. If voting machine counts have been altered, we will see telltale patterns in these error measurements. We can determine whether our machine votes are being counted honestly or whether some candidates benefit and others are victimized by election fraud. The exit poll results from all five polling locations show strong evidence of election fraud in both the patterns and the size of the errors.


[Graph: SE Wichita President Race % Vote Share for Official Count and Exit Poll]

EXAMPLE: The graph above shows the results for the presidential race from SE Wichita. According to the machine totals, Hillary Clinton received 435 votes out of 983 cast on the voting machines there – a 44.25% vote share. Our exit poll data showed Hillary Clinton receiving 306 votes out of 645 survey responses to this question from voters who cast their votes on those same machines at that polling location – a 47.44% vote share. The difference between those two values, -3.19%, is the error, illustrated in the sketch and graph below. This error measurement is computed for each candidate, race, type of voting equipment, and polling location.
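Here is the same arithmetic as a minimal Python sketch, using exactly the figures quoted above (no new data):

```python
# Error computation from the SE Wichita presidential example.
official_votes, official_total = 435, 983  # Clinton machine votes / total machine votes
poll_votes, poll_total = 306, 645          # Clinton poll responses / machine-voter responses

official_share = official_votes / official_total  # 44.25%
poll_share = poll_votes / poll_total              # 47.44%
error = official_share - poll_share               # -3.19 percentage points

print(f"official {official_share:.2%}, exit poll {poll_share:.2%}, error {error:+.2%}")
```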

There were some problems with some of the data. I have included data from all five sites for their electronic voting machine counts. The link above gives the raw data for both voting machines and scanned paper ballots for all five sites, but only three of the five sites had sufficiently high quality data to be included in this analysis. This post discusses what data was left out and why.

Presidential race results show votes shifted from Clinton to Trump in four of the five locations.  The errors for the presidential candidates by site and voting equipment are shown in the table below.

[Table: Exit Poll Errors for Presidential Race]

These values are also shown in the chart below. The Johnson and Stein errors look random and reasonably small. The Clinton and Trump errors are much larger and roughly mirror each other on the DRE machines, with votes shifting from Clinton to Trump in four of the five polling locations.

[Chart: Exit Poll Errors for Presidential Race]

To statistically analyze the size of the errors, we use the hypergeometric distribution. This computation is available in EXCEL as HYPGEOM.DIST. It takes into account both the size of the population (total voters in the official count) and the sample size (total exit poll responses) in computing the probability of getting an error as large as or larger than our exit poll had. See this post for the technical details of how this computation is done.
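For readers without EXCEL, here is a hedged Python sketch of the same kind of computation using scipy's hypergeometric distribution. The counts are the SE Wichita presidential figures from the example above; note that forming the two-sided p-value by doubling the smaller tail is one common construction and may differ in detail from the linked write-up:

```python
# Hypergeometric test sketch: is the exit poll count for a candidate
# surprisingly far from what the official count predicts?
from scipy.stats import hypergeom

N, K = 983, 435   # official: total machine votes, Clinton machine votes
n, k = 645, 306   # exit poll: total responses, Clinton responses

lower = hypergeom.cdf(k, N, K, n)       # P(X <= k)
upper = hypergeom.sf(k - 1, N, K, n)    # P(X >= k)
p_two_sided = min(1.0, 2 * min(lower, upper))
print(f"two-sided p-value = {p_two_sided:.4f}")
```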

The p-values for the two-sided tests are given in the tables below. Yellow indicates a statistical flag: a result with less than a 5% probability of occurring if there were no election fraud. Bold red numbers indicate probabilities of less than 1 in 1,000.

[Table: Probabilities for the Exit Poll Results of the Presidential Race]

The p-values clearly confirm the initial impression generated by the graph above: voting machine election fraud occurred in four of the five polling locations shifting votes from Clinton to Trump.

One interesting detail – Jill Stein actually received more scanned paper ballot votes in our exit poll in SW Wichita than were recorded at that site. Since that can’t actually happen without errors or dishonesty, that probability is an absolute zero. I wrote out ‘Zero’ to distinguish this situation from 0.0000, which indicates a probability that is below 0.00005 but still above zero.
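The zero follows directly from the model: a sample drawn from the official count cannot contain more votes for a candidate than the count itself recorded. A tiny illustration with made-up numbers (not the actual SW Wichita Stein counts):

```python
# Under the hypergeometric model, observing more successes in the sample
# than exist in the population has probability exactly zero.
from scipy.stats import hypergeom

M, K, n = 500, 4, 120  # hypothetical: total ballots, official Stein votes, poll responses
k = 6                  # hypothetical: Stein votes observed in the poll (> K)

print(hypergeom.sf(k - 1, M, K, n))  # P(X >= 6) with only 4 successes -> 0.0
```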

The Senate and 4th district Rep races were skewed toward the Libertarians.   

The only pattern in these two races was that the Libertarians ALWAYS benefitted from the errors, with higher machine counts than exit poll percentages. Both the Democratic and Republican candidates lost votes, in some cases by suspiciously large amounts approximating the size of another candidate’s error.

Polling locations differed considerably. Sumner County looks as if votes were taken from Moran (R) in the Senate race, and even more from Giroux (D) in the 4th Congressional District race, with undervotes increased in both races. Independent candidate Miranda Allen for the 4th district benefited by an unusual amount in the machine vote counts at all three Sedgwick County polling locations. These errors look like fraud.

Below are tables and graphs of the errors between the official results and the exit poll results for the Kansas Senate and 4th Congressional District races, along with tables of the p-values for those errors.

[Tables and graphs: Exit Poll Errors and Probabilities for the Senate Race and the 4th Congressional Race]

The data from the Supreme Court judges show the most clarity. The pattern that fits across all five judges cannot be denied. In addition, the magnitude of the errors exceeds that found in the other three races.

The four Supreme Court judges actively opposed by Gov. Brownback had Yes votes stolen in the same four locations that favored Trump. The only positive error is a tiny one for Nuss in the SE Wichita location; the remaining errors for those four sites are negative for all five judges. Sumner, different once again, showed only positive errors (more Yes votes) for all five judges.

Stegall, Brownback’s only appointee up for retention, has results identical in direction to the other four judges, but smaller in magnitude. He has only one slightly improbable dearth of Yes votes, in the scanned paper ballots at the SW Wichita location. For Stegall, only the fact that his pattern matches the others is a sign of fraud against him. For the other judges, both the size and the pattern of the errors testify to the rigging of the official counts by the machines.

Below are tables and graphs of the errors between the official results and the exit poll results for the Kansas Supreme Court judge retention votes, along with tables of the p-values for those errors. Multiple graphs of the judges are shown, grouped by judge (as in previous graphs) and grouped by location. The latter makes it undeniable that all sites show signs of corruption, although not in agreement on the preferred direction. Finally, a graph shows the judges next to a graph of the presidential candidates on the same scale.

[Tables and graphs: Exit Poll Errors and Probabilities for the Kansas Supreme Court Judges, grouped by judge and by location, with the Presidential Race errors on the same scale]

This last comparison, putting the errors for the presidential race on the same scale as the judges, actually startled me when I first graphed it, even though I was expecting it. The average size of the errors should be approximately the same for all the races, since they are all drawn from a nearly identical sample of voters. To a statistician, this increase in the magnitude of the error for the judges is another flashing red light saying that these machine results have been rigged. Rigged in different ways in different places, but all of the sites with exit polls show the telltale signs of corruption.