A Tale of Toms, Dicks, and Harrys

For new readers, I am discussing the analysis of exit poll results for S.E. Kansas in 2016.  These exit polls were performed because official audits are not done and independent audits are not permitted.  They are therefore the best information that I, as a voter in Kansas, have been able to obtain to make an independent assessment of the accuracy of the official vote totals as certified by our elected officials.

My assessment is our voting equipment was rigged!  Not by enough to sway the outcome of the elections I had adequate data on, but the official results differed from our exit poll results, site by site, too consistently in one direction to be anything other than deliberate.

I recently received a harsh but relatively accurate and detailed review of my exit poll paper.  The overall judgement was that I need to take the passion and conviction out of my conclusions to make it acceptable for a peer-reviewed publication.  I must admit, I have allowed my passion to seep through, which is generally frowned upon in scientific papers.  I will rewrite it.

It is difficult to be dispassionate because this is very personal for me.  I was born in Wichita, Kansas.  I ran the exit poll booth at my personal voting station, as did all of the site managers.  We were not doing this for fun and glory; I was able to get the volunteers needed to run five locations because enough people shared my concern about the accuracy of our official counts and were willing to spend the time and energy necessary to accomplish it.  If I want to see these results published in an academic journal, and I do, I have to tone down the anger that has bled into my writing about it.

The “Liars, Idiots and Introverts” section received some particularly scathing comments.  The reviewer found it offensive.  I’m not surprised; it was designed to be offensive.  But he/she is right that it’s inappropriate for an academic journal.  This section was written when I was feeling particularly frustrated by people disregarding my results by claiming sampling bias – i.e., people choosing not to participate or deliberately answering incorrectly.

There is also a possibility, that I did not discuss, of inadvertent error in ballot design causing a significant number of voters to err in the same way.  The Butterfly Ballot used in Florida during the 2000 election is an example of this type of error.

If I want to convince other people that other explanations are insufficient to explain the discrepancies, I need to do a better job of it.  In the end, it is a subjective evaluation of the relative probabilities of the possible explanations.  I cannot prove which one is correct.  No one can.  I think the probability of these results being due to sampling bias is too low to sway my assessment that our voting machines are rigged.  Here is another attempt to communicate my reasoning as to why I feel that way.

The Tale of Toms, Dicks, and Harrys

Tom is my name for folks that voted for Trump, but didn’t want the family member or neighbor, who was filling out their own survey standing next to him/her, to know that.  Or maybe Tom is a trickster who delights in giving pollsters wrong answers.  Or maybe he/she just dislikes taking exit polls.  To see the results we did, we had to have many Toms who either lied to us about it, claiming Hillary instead, or just refused to fill out our survey.

Dick is my name for folks that voted for the Libertarian Candidate in the Senate and 4th Congressional Races, but didn’t want the family member or neighbor, who was filling out their own survey standing next to him/her, to know that.  Or maybe Dick is a trickster who delights in giving pollsters wrong answers.  Or maybe he/she just dislikes exit polls.  Dick either lied to us about it or refused to fill out our survey.

Harry is my name for folks that voted for Miranda Allen, the independent candidate in the 4th Congressional Race, on a voting machine, but didn’t want the family member or neighbor, who was filling out their own survey standing next to him/her, to know that.  Or maybe Harry is a trickster who delights in giving pollsters wrong answers.  Or maybe he/she just dislikes taking exit polls.  Harry either lied to us about it or refused to fill out our survey.

Finally, we get to the judges.  We need two more sets of voters to explain the results for the judges as due to sampling bias.  I’ll call them Johns and Joans.  Johns, who live in Wichita and Winfield but not Wellington, voted against all the judges while lying about it or refusing to take our survey.  Joans live only in Wellington and voted to keep the judges while lying about it or refusing to take our survey.  Apparently, we have nearly twice as many Johns and Joans as we have Toms, Dicks, and Harrys.

What is the probability that all the Toms, Dicks, Harrys, Johns and Joans in S.E. Kansas are responsible for the bias in our exit poll results, rather than deliberate machine manipulation or rigging of the machines?  This is a valid question to ask.  We can examine our data and see which explanation is a better fit to the data.

There’s a stereotype of Libertarians as anti-social jerks. If this were accurate, it might be a reasonable alternative explanation for the Libertarian results – a lot of Libertarians are Dicks.  On the other hand, how likely is it that, in Southeast Kansas, home of Koch Industries, a few Libertarians independently (or possibly even in cahoots) successfully hacked the voting machines here?

Why is Wellington devoid of Toms?

Why are Harrys found only in Sedgwick County, and why do they disdain the use of paper ballots?  Is it more likely that a statistically significant percentage of Miranda Allen’s voters in Sedgwick County, but not Sumner or Cowley, are Harrys? This pattern does fit the explanation of a “butterfly ballot” type problem, as it shows up in only one county and on only one type of voting equipment. It is possible that Sedgwick County officials inadvertently programmed the voting machines in a way that caused voters to accidentally indicate Miranda Allen rather than leaving the 4th Congressional district race blank, as they reported to us.  Or maybe Miranda Allen has a fan in Wichita possessing the wherewithal to successfully hack the voting machines in Sedgwick County?

Now consider the relative probabilities of the two alternative hypotheses.  What is the probability of all the Toms, Dicks, Harrys, Johns and Joans existing in the numbers required to produce the discrepancies we found in our survey results versus the probability that some nefarious and technically competent people were able to access voting equipment or software and made unauthorized changes to the software?

Here’s a recent opinion piece on that topic published in Scientific American:  “Nevertheless, it has become clear that our voting system is vulnerable to attack by foreign powers, criminal groups, campaigns and even motivated amateurs.”

I will say that the probability that Libertarians have more than their fair share of Dicks everywhere is harder for me to reject than the existence of all the Johns and Joans.  But accepting that as a viable explanation also embodies some assumptions about the character of Libertarians I am loath to accept.

Until I see evidence that Libertarians actually have these traits in greater numbers, I assume that the tricksters, introverts and idiots are randomly distributed among the various political parties.  Other people can and do differ in their willingness to accept that assumption.

Now, this Tom, Dick, and Harry story won’t go into my paper.  It’s not stuffy enough for an academic journal.  But I found writing it helpful in getting to a concise statement of why I feel sampling bias doesn’t work as a reasonable explanation for the exit poll results.  I hope my readers find it helpful in understanding my thinking as well.

In fact, writing it has allowed me to add the last bullet to the set of arguments I’m working on for my revised paper.  In bullet form, here are my reasons for concluding that my exit poll results prove deliberate fraud and that sampling bias and inadvertent ballot or survey errors are not sufficient to explain the data.

 

Update on Exit Poll Results

On Feb 11th, I spoke at the Women for Kansas Cowley County (W4K-CC) meeting.  We discussed the results of the exit poll they had run on Nov. 8th.

I discovered that the Cowley County paper ballot official results are not an apples-to-apples comparison the way they are in Sedgwick County.  Those results are not suitable for inclusion in my analysis.

It is not the only dataset found to be unsuitable for inclusion.  I have removed it from my upcoming peer-reviewed publication.  I have decided to leave my original blog post unchanged while updating my post discussing excluded data.

I understand why people don’t pay attention to statistics.  They can easily be twisted to yield any result desired by management.  That happened in Flint, Michigan.

On the other hand, there are legitimate reasons to eliminate data when it is found to be unreliable.  The Cowley County paper ballot results are such an instance: the numbers given include mail-in ballots cast in those precincts.

Another reason I have chosen to leave the original graphs up is that they nicely demonstrate the difference in pattern between a randomly introduced source of variation and a consistent bias, which is evidence of fraud.

Cowley County results had me scratching my head.  The machine results showed trends similar to Wichita.  The paper ballots showed large errors, but they benefited a random scattering of candidates and races.  If you are interested in this sort of analytical detail, feel free to go through the charts and decide for yourself.  I can’t rule out fraud for that dataset, but I don’t know what caused the deviations.  If it was fraud, it was either a mercenary selling votes to any candidate, or multiple agents working at cross purposes, perhaps?  But given that the data collection limitations impose greater variability, which would produce the pattern of errors we see in those graphs, fraud is not the most probable cause of those deviations.

Datasets are sometimes tainted by problems that have nothing to do with the question being asked, but are due solely to constraints on the data available.  There are limitations imposed by the methods of both the official results and the exit poll survey.  I’m publishing ALL of the raw data, as well as detailing what data is excluded and why.  Anyone who cares to may look at what is being left out and decide for themselves whether the reasoning for the exclusion is sound.  With the exception of the Cowley County data, the other excluded datasets tend to support the fraud hypothesis.

How can you be sure that the voting machines in southeast Kansas were rigged?

How can I be so sure? Couldn’t there be some other cause of the bias?  That was the most common inquiry at my presentation Saturday, when I explained my exit poll results to the people who helped collect the data and had a vested interest in understanding the results.  I may have come across as a bit defensive in regard to this question.  I’m sorry if I did.  It’s hard to articulate the depth of my certainty, but I’ll try.

I carefully set up these exit polls to compare the official vote count by machine type.  The only legitimate concern regarding the meaning of these results is a biased sample. Not everybody tells the truth.  Some people delight in giving false answers to surveys.  How are you going to account for that? It’s a fair concern.

While I cannot prove that didn’t happen (at least, not without access to the ballots, which isn’t permitted), this is part of the normal error I expect.  It always helps to state assumptions explicitly.

INTROVERTS, LIARS, AND IDIOTS ASSUMPTION: THESE TRAITS ARE RANDOMLY DISTRIBUTED AMONG ALL CANDIDATES AND POLITICAL PARTIES.  I am assuming that people who were less likely to participate (introverts), more likely to fudge their answers (liars), or more likely to make mistakes (idiots) in filling out the survey did not differ in which candidates they voted for.

I received the following email that sums up this concern nicely and also suggests a couple of ways to check that hypothesis.

Hi Beth,

The observed discrepancies between official results and your poll results very clearly show that Clinton (D) voters were more strongly represented in those polled than in the official vote count; Trump (R) voters were less well represented.  There are many possible explanations for this discrepancy.  One hypothesis is that a certain percentage of voters “held their nose and voted for X” and would never have participated in the poll.  If these voters tended to be more of one party than the other, then that party would be less represented in the polls.

Fortunately, your data provide a means to test this hypothesis about the “missing minority”, for it leads to this prediction:  
If a “missing minority” was biased towards X, then sites at which X had a greater percentage of the votes would be least affected by vote disparities.

A corollary prediction:  sites having the highest response rate would be least affected by vote disparities.

Have at it!
Annie

The main reason I find this hypothesis implausible is that the discrepancies for the Supreme Court judges were twice as large as, and followed the same pattern as, the presidential race discrepancies. There’s no reason to think more people ‘held their nose’ for judges than for president!

Regarding those two predictions:

  1.  The sites with the greatest discrepancies were machine counts for SE Wichita, Urban Wichita and Cowley.  The sites with the highest %Trump voters were Cowley, SW Wichita and Sumner.  No correlation there.
  2. The site with the lowest response rate, Sumner with 25%, also had the lowest discrepancies between the exit poll and the official results for the Pres. race.

In short, we do not see the other data relationships we would expect if the introverts, liars, and idiots assumption were false.  There is no reason to assume these individuals were more likely to vote for one candidate than another, producing the bias in our data.

Analyzing Exit Poll Results

It’s important to state up front what data will be collected, what analysis will be performed on that data, and what constitutes evidence of a serious problem versus random error in any poll of this nature.

Polling stations allow voters access to the results after the polls are closed and the votes have been tallied.  In Sedgwick County, separate reports will be printed for the electronic voting machines and the scanned paper ballots.

We will also get a count of the number of provisional ballots collected from the polling location.  These ballots will not be opened until the voter’s registration is verified, and there will never be an official tally of the provisional votes by polling location.  But we can look at the results we have for voters who submitted provisional ballots and compare them with the votes that were counted at the polling location.  If there are significant differences, this is evidence of the voter suppression effect of Kris Kobach’s voter registration rules.

I have created a general data collection and analysis Excel spreadsheet.  Multiple precincts vote at each polling location and the results are reported by precinct, not by polling location, so I’ve set up the spreadsheet to sum the numbers and compute the appropriate probabilities.

I will be customizing this Excel file for each exit poll location in Kansas, but I am happy to share a general version of this worksheet with anyone who is interested in running an exit poll for their own area.  All you will have to do is input the official results and your exit poll results.  This is an example of the output.

 

Example Data Analysis
Presidential race chi-squared result: NA

Candidate      Exit Poll   Official Results   Binomial Probability
Clinton (D)        52             60                0.0638
Trump (R)          38             30                0.0530
Johnson (L)         6              6                0.5593
Stein (G)           2              2                0.5967
Other               2              2                0.5967
Total             100            100

There are two different analyses that can be used in this situation.  The chi-squared test gives the probability that the actual results differ from what would be expected under the assumption of random chance.  Excel has this test as a built-in function: CHISQ.TEST.  But the chi-squared test has minimum data requirements that were not met in this example, hence “NA” (Not Applicable) as the result of this test.

Since the chi-squared test will not work for every possible dataset, I also show the individual binomial probability for each candidate.  The minimum probability from this set of five computations is a reasonable approximation to the exact computation using the multinomial extension of the binomial distribution, and it can easily be computed using the built-in Excel formula BINOM.DIST.
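For readers who would rather check these numbers outside a spreadsheet, here is a plain-Python sketch of that binomial computation.  It is a stand-in for BINOM.DIST, not the actual worksheet; the one-sided tail convention is my assumption, so the values may differ slightly from the table above.

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n independent trials."""
    return comb(n, k) * (p ** k) * ((1 - p) ** (n - k))

def tail_prob(official, n, poll_share):
    """One-sided tail probability of an official count at least as far
    from the exit-poll share as the one observed, assuming the official
    count follows the poll share (the null hypothesis)."""
    if official >= n * poll_share:
        return sum(binom_pmf(k, n, poll_share) for k in range(official, n + 1))
    return sum(binom_pmf(k, n, poll_share) for k in range(official + 1))

# Numbers from the example table above (100 exit-poll respondents):
print(round(tail_prob(60, 100, 0.52), 4))  # Clinton: official 60 vs. poll 52
print(round(tail_prob(30, 100, 0.38), 4))  # Trump: official 30 vs. poll 38
```

Both probabilities come out in the neighborhood of the table’s 0.0638 and 0.0530 – close to, but not below, the 0.05 threshold discussed below.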

How to interpret this:

We judge the probability of machine manipulation of the vote by evaluating the probability of our results assuming no manipulation of votes is occurring.  This is referred to as the “null hypothesis”.  All probabilities shown are made under this assumption.  If this probability is above 0.05 (5%), we can reasonably conclude that the differences between the machine vote share and the exit poll vote share are typical of random variation due to the normal errors in the process.

If this value lies between 0.05 and 0.001, raise an eyebrow and give the numbers for that race a little extra scrutiny and consider it in concert with the other exit polling results.

If this value lies below 0.001, that is evidence of fraud.  Personally, I would like to see a recount of any race with results that fall this far from normal.  But only a candidate can request a recount in Kansas.
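The three-tier reading above can be captured in a few lines.  The thresholds are the ones stated in the text; the labels are my paraphrase.

```python
def interpret(p):
    """Map a null-hypothesis probability to the reading described above."""
    if p > 0.05:
        return "typical random variation"
    if p > 0.001:
        return "raise an eyebrow; scrutinize alongside the other sites"
    return "evidence of fraud; a recount is warranted"

print(interpret(0.0638))   # e.g., the Clinton value from the example table
print(interpret(0.0004))   # a hypothetical far-from-normal result
```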

In this example, I have contrived to show Trump with a questionably low number of votes in the official count compared to the exit poll results.  Hillary has a slightly elevated value.  But these results are not unexpected, as the minimum probability of results this far off is above 0.05.

But if the other sites show similar values and they are all benefiting the same candidate, that would be concerning.  If 2 or 3 sites out of 5 show the same beneficiary of the differences, that’s reasonable.  But if 5 out of 5 sites show the same beneficiary, it’s evidence of rigging.

If we see multiple races with low odds and the same slate of candidates are benefiting, we have solid evidence of machine manipulation of our official votes.  If we see only the normal expected errors, then we have solid evidence it is NOT being manipulated.

While a single location and a single race might show evidence of manipulation, savvy cheaters will try to avoid this method of detection by capping the shift at any one site so that its individual probability stays above 0.05.  But by looking at multiple races and sites, we can establish whether even small shifts show evidence of cheating.

We can define a slate of candidates by party and check the probability of getting the results we got using a similar binomial analysis.  Under the null hypothesis of no manipulation, the probability that an error benefits a given candidate is 50%.  There are three races with candidates and five judges we are asking about, for a total of 8 results for each polling location.  Governor Brownback would like to see 4 of the 5 judges lose their jobs so that he can replace them.  We can also presume he supports the Republican Party candidates for President, Senate, and Representative.

We will have data from 5 different locations, for a total of 40 samples, each with approximately a 50% probability under the null hypothesis.  (For example, let X be the number of errors that went against the Brownback administration’s preferred candidates.  With 40 random samples as defined above, the probability of seeing X or fewer errors in that direction is computed with the following Excel formula: BINOMDIST(X, 40, .5, 1).)

If this value is extremely low (less than .001), we conclude that the Republican Party has unduly benefited and further investigation would be appropriate.
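A plain-Python equivalent of that BINOMDIST call, for anyone without Excel handy (assuming the cumulative form, as the final argument of 1 indicates; the count of 10 below is a hypothetical illustration, not a result from my data):

```python
from math import comb

def binom_cdf(x, n, p=0.5):
    """P(X <= x) for a binomial(n, p): Excel's BINOMDIST(x, n, p, TRUE)."""
    return sum(comb(n, k) * (p ** k) * ((1 - p) ** (n - k))
               for k in range(x + 1))

# Hypothetical illustration: if only 10 of the 40 site/race errors went
# against the preferred slate, a split that lopsided would be very
# unlikely under the null hypothesis of 50/50 errors.
print(round(binom_cdf(10, 40), 5))
```

A result that small would fall near the 0.001 line and trigger the further investigation described above.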

How to interpret the Provisional Ballot Data:

We cannot know the final count of the provisional ballots collected at a polling location.  They are processed at the county level, and only those shown to be from registered voters are opened and counted.  What we can do is compare the responses of provisional-ballot voters with the other responses to our exit poll.  If there is a major difference between those asked to fill out provisional ballots and those whose votes were automatically counted, we have a measure of the effect of the voter ID laws and whether they made a difference to the outcome.

For each race, we can use the chi-square test if we have sufficient data.   Otherwise, we can use the binomial approximation similar to the one used to compare the official count to the exit poll survey results.

electioneering and instructs them regarding what they can and cannot do.  While permission is not required to run an exit poll, we do need permission from the property owner to set up a booth to collect our ballots and provide chairs and shade for our volunteers.  Mainly, we want everyone to know what we are doing to avoid any issues arising on election day.

How to Run an Exit Poll Part 1

How to Run an Exit Poll Part 2


Creating an Exit Poll Ballot

This is part 3 of my “How to Run an Exit Poll” series.

The exit poll survey ballot is important, but not complicated. The only question of interest, other than their ballot choices, is the method of voting.  Data will be available at the end of the day with separate totals for the machine-cast votes and the scanned paper votes.  There will be no official count of provisional votes at this station, so we can only compare those votes to the overall total for the polling station. But that comparison allows us to evaluate whether giving people provisional ballots amounts to a voter suppression tactic.

Since space on our survey form is at a premium, and because including that information makes responses less anonymous, I do not recommend including questions about age, race, or gender.  Generally speaking, you want to keep the words to a minimum.  (Not an easy task for me.)

Here is an example survey I have developed for exit polls in Sedgwick Co.  I included a short paragraph at the top because I feel it’s important to let people know why you want this information and reassure them that results are anonymous, just like their vote.

Sample Exit Poll Survey

The first question is really too long, but I wanted to be as clear as I could about it.  In Sedgwick County, Kansas, there are three possible options: a vote cast via electronic voting machine, a paper ballot that the voter feeds into a scanner for on-site electronic counting, or a provisional ballot – a paper ballot that is sealed into an envelope to be counted later (maybe).

Asking about the specific races is straightforward.  State the office and then list the candidates.  Circling answers reduces the need for a blank or box to check.  It saves space on the page.

Staggering the answers for questions with more than one line of answers (ex: Pres.) makes it easier to discern the voter’s intent.  When they are stacked one above another, the answer can easily become ambiguous.

Since a single polling location will have multiple precincts voting there, it’s a problem to ask about races where different precincts will be voting for different candidates.  Generally, I want to confine the questions to races that appear on every ballot at the polling location.  On the other hand, my site managers for the SW Wichita location are very interested in the county commissioner races.  We arrived at the following:

Who did you vote for your County Commissioner Race? (Select one for District 2 OR  District 3)   –  sw-wichita-nov-8-exit-poll-ballot

I hope we won’t get too many voters identifying their choices for both District 2 and District 3, but I expect we will get some.  OTOH, it’s the only question that could be spoiled, and I’m reasonably comfortable assuming that such mishaps are equally likely regardless of which candidate the voter supports.  I think we will get good data from this exit poll.
