How to run an Election Exit Poll

I’m working to set up multiple exit polls here in Kansas in November. I thought that people in other areas might be interested in setting one up themselves. Just one dedicated individual, along with a few additional volunteers working a few hours apiece, can pull this off. You may also need to spend $50 to $150 on supplies, such as copies of your surveys and refreshments to offer voters.

The dedicated individual is the exit poll site manager. The additional volunteers only need to spend a few hours on Election Day asking voters to complete surveys. Two people should staff the exit poll booth at all times, in case an emergency arises. This post outlines what an exit poll site manager needs to do to run a successful exit poll.

The approach I recommend is called cluster sampling. Each site provides an independent check on the accuracy of the official counts at that polling station. By combining information from multiple sites, we can determine with good precision whether the discrepancies found are reasonable and evenly distributed or whether they show evidence of systematic bias, which would indicate problems with our votes being counted accurately.

This approach means that you need to try to contact all voters at that location to request they complete an exit poll survey.  The reason for this is that the purpose of this exit poll is to validate the official results at that location.   It is not to make predictions prior to the close of polls.  It is not to analyze for demographic information afterwards.  It is to validate the official results.  By concentrating our efforts at relatively few polling stations, we can attain a higher level of confidence that the results of our survey are representative of the polling location.
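To make the idea concrete, here is a minimal sketch, in Python, of how discrepancies from several sites could be combined to look for systematic bias. All of the site names and numbers below are invented for illustration; they are not data from any actual poll.

```python
# Sketch: combine per-site discrepancies between official counts and exit poll
# results to look for systematic bias.  All numbers are hypothetical.
from scipy import stats

# Candidate's vote share in the official count vs. the exit poll at each site.
sites = {
    "Site A": {"official": 0.52, "exit_poll": 0.50},
    "Site B": {"official": 0.47, "exit_poll": 0.48},
    "Site C": {"official": 0.61, "exit_poll": 0.55},
    "Site D": {"official": 0.39, "exit_poll": 0.40},
}

# Discrepancy = official share minus exit poll share at each site.
discrepancies = [s["official"] - s["exit_poll"] for s in sites.values()]

# If discrepancies are just sampling noise, they should scatter evenly around
# zero.  A one-sample t-test asks whether their mean is consistent with zero;
# a pattern that consistently leans one way suggests systematic bias.
t_stat, p_value = stats.ttest_1samp(discrepancies, 0.0)
print(f"mean discrepancy = {sum(discrepancies) / len(discrepancies):+.3f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

With only a handful of sites such a test has limited power; the point is simply that discrepancies scattered evenly around zero look very different from discrepancies that all lean the same way.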

The first thing to ask is how they voted. In my location, there are three options: by machine, by scanned paper ballot, or by provisional ballot. At the end of the day, I get the counts of votes cast for each candidate by machine and by scanned paper ballot. Provisional votes are not counted until a determination is made of whether that person was eligible to cast the ballot. The exit poll results for provisional ballots can be compared to the accepted ballots. If significant deviations occur, that is a measure of the impact of voter suppression attempts, such as voter ID laws.

Before finalizing the design of your exit poll survey, you will need to select a location. Contact your local elections office and get a list of all the polling locations. Let them know you are planning a citizens’ exit poll and ask if there are any regulations or laws that would affect it. In Sedgwick County, the only significant rule was that we could only approach people after they voted, not before. There is also a law regarding the distance required for any electioneering, but as long as we only approach exiting voters, this is not a concern.

Site Manager Duties:

The site manager is the point person for everything to do with the exit poll at their polling place. They will be there in the morning to set everything up, and they will wait at the polling location at the end of the day to get the official results once the machines have finished printing out their records. They do not need to be there the entire day, but they do need to be available if any problems arise.

Prior to Election Day:

  1. Select polling location: Site managers need to consider the polling places available near their home, perhaps even scout the locations, to decide which they would prefer to exit poll. You will need to contact the owner/manager of that location and inform them of what you will be doing. Find out if they have any concerns and address them or refer them back to me. Obtain written permission to be on their premises to conduct this survey when appropriate.
  2. Finalize survey: While there will be three races applicable to all of Sedgwick County, site managers decide which locations they will monitor, so they have the option to add questions specific to their polling place, such as state legislative or judicial races.
  3. Prepare supplies: The site manager will decide on and arrange for all supplies to be there: tables, chairs, refreshments, survey forms, a ballot box, etc. (see the Suggested Supply List).
  4. Schedule volunteers: We’ll need to meet together to accomplish this. I’ll keep a list of volunteers, and we can discuss where volunteers are needed and when. I’m also going to see if I can get some student help for the times before and after school, which are often the busy times at polling locations as well.

Election Day – It’s a long day, but this could be split up between two people, say a morning manager and an evening manager.   

  1. Set up:  The site manager arrives half an hour before the polling station opens.  They set up the exit poll booth, making sure everything is ready for the first voter of the day.
  2. Maintain: The site manager is the person on call for any issues that arise. They should stay close by, available to take care of whatever comes up. Run out of survey forms? The site manager will bring more. A volunteer calls in that they can’t make it after all? The site manager either fills in or finds someone there who can.
  3. Close down:  The site manager will be responsible for securing the completed survey forms and counting them.  The site manager will ensure the booth area is cleaned up and all borrowed equipment is returned.
  4. Official results: The site manager will need to remain on the premises to collect the official results for that polling location. Meet with the election officials for your location sometime in the morning and let them know you will be doing this. They should allow you to examine the results tape for yourself. However, if they object, you can ask them to fill out your survey form with the machine results and the scanned paper ballot results from the printouts. In addition, ask them for the total number of provisional ballots turned in at that location.

After Election Day:

  1. Count your results
  2. Publicly post both the official results and your exit poll results for your polling location, or email me your results and I’ll post them on my site.

Exit poll went well; no significant signs of election fraud.

With help from nearly a dozen volunteers, I conducted an exit poll at one polling location during this primary. It even made the local newspaper. I am quite pleased with the results; everything went smoothly.

It was primarily meant to be a trial run for the November election, making sure that I will be able to collect the data necessary to identify problems with our machine counts. While some mistakes were made (all by me; the volunteers were fantastic!), I feel confident that we will be able to accomplish that task in November.

I know that many people are interested in the results of this survey.  Overall, things looked good.  There were a couple of yellow flags, but nothing I would recommend taking action on.

Data Collected: The primary question I asked was how the individual had voted: by machine, with a scanned paper ballot, or with a provisional paper ballot (see the Aug 2 Exit Poll Ballot).

The exit poll was conducted at one polling location, with survey responses compared to the machine-tabulated results at that location. Respondents were asked how they voted: by machine, by scanned paper ballot, or by provisional paper ballot. Results are shown below. Due to the small number of paper ballots, both scanned and provisional, analysis results are shown for the machine tallies and for the totals for the polling location, but not for the paper ballots separately. The counts of votes cast and surveys collected are shown below in Table 1.

Table 1:

Analysis Table 1

 

There is a discrepancy between the official count of provisional ballots (1) and the exit poll count (3). This is likely due to errors in marking the exit poll survey, so I am not concerned about this discrepancy.

There were an additional 47 surveys collected that were unusable due to problems ranging from being completely blank to having responses filled in for all races, both Dem and Rep.

We asked about six races with two candidates each, three races in each party. However, only three of those races were applicable to everyone who voted at that location. There were multiple (5) precincts voting at the polling location, and three of the races asked about were limited to voters in only one or two of those precincts. As a result, survey takers could indicate a choice in those three races even if they did not actually vote on them. For that reason, I have labeled the data collected on those three races as ‘questionable’. Caution should be used in drawing conclusions from the exit poll data for those races.

The results for the six races are shown in Table 2, with the winners’ names bolded.

Table 2:

Analysis Table 2

Assuming that the official results were accurate, I computed the probability of our exit poll results using the binomial distribution. I rated those results as Green (looks good), Yellow (suspicious but not conclusive), or Red (definitely something wrong). The usual threshold for statistical significance is a probability below 5%. There were no red flags, but two of the six races got a yellow caution rating. These results are shown in Table 3.

Table 3:

Analysis Table 3
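For anyone who wants to reproduce this kind of check, here is a minimal sketch of the binomial calculation, using made-up counts rather than the actual numbers from the tables. It treats the official vote share as the “true” probability and asks how likely an exit poll tally at least as extreme as the one observed would be; the 5% and 1% cutoffs used to assign a rating are my shorthand for the green/yellow/red scheme, and in practice the rating also depends on data quality and the multiple-comparisons issue discussed below.

```python
# Sketch of the binomial check: assuming the official results are accurate,
# how unusual is the exit poll tally?  Counts below are made up for illustration.
from scipy.stats import binomtest  # SciPy >= 1.7; older versions use binom_test

official_share = 0.58   # candidate's share of the official machine count
exit_poll_n = 200       # usable exit poll surveys for this race
exit_poll_k = 100       # of those, surveys marked for this candidate

result = binomtest(exit_poll_k, exit_poll_n, official_share, alternative="two-sided")
p = result.pvalue

if p >= 0.05:
    rating = "Green (looks good)"
elif p >= 0.01:
    rating = "Yellow (suspicious but not conclusive)"
else:
    rating = "Red (definitely something wrong)"

print(f"probability of an exit poll result this extreme: {p:.3f} -> {rating}")
```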

The races that all survey respondents voted on were the U.S. Senate (Dem and Rep) and the U.S. Rep (Dem). Results for the losing candidates are shown in Figure 1.

Figure 1:

Analysis Figure 2

The Senate race for the Dem candidates is given a yellow warning because the probability of the difference between the official results and our exit poll is only 3%. This is not considered a red flag because we are making 12 different comparisons, which needs to be accounted for in assessing the results. For example, if 12 comparisons are made using a 5% threshold, there is a 45.96% probability of at least one of them falling below that threshold by random chance. There is a whole set of statistical techniques designed to account for multiple comparisons if I wanted to get really precise about it. In addition, while the official votes skewed towards Ms. Singh, she lost the statewide election, so even if there was manipulation, it would not have affected the outcome of the race.
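The multiple-comparisons arithmetic is easy to verify. The sketch below reproduces the 45.96% figure and also shows one standard adjustment (the Šidák correction), which gives the per-comparison threshold that would keep the overall false-alarm rate at 5% across 12 comparisons.

```python
# Verify the multiple-comparisons figures quoted above.
alpha = 0.05   # per-comparison significance threshold
m = 12         # number of comparisons made

# Probability that at least one of m independent comparisons falls below the
# threshold purely by chance.
p_at_least_one = 1 - (1 - alpha) ** m
print(f"chance of at least one false alarm: {p_at_least_one:.4f}")   # 0.4596

# Sidak correction: per-comparison threshold that keeps the overall
# false-alarm rate at 5% across all 12 comparisons.
alpha_per_comparison = 1 - (1 - alpha) ** (1 / m)
print(f"adjusted per-comparison threshold: {alpha_per_comparison:.5f}")   # ~0.0043
```

The observed 3% is well above that adjusted threshold, which is consistent with calling it a yellow flag rather than a red one.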

We had no method to identify what precinct people were in, so for the Kansas House and Senate races, survey takers could mark a candidate who was not on their precinct’s ballot. For this reason, the exit poll data must be considered questionable. On the Republican side, since no precinct voted on both the House and Senate races, the 38 surveys with both of those races marked were not included in the totals for those two races. Results for the losing candidates of these races are shown in Figure 2.

Figure 2:

Analysis Figure 1

The official results for the Kansas Rep. Dist. 87 race get a yellow rating. The results were skewed towards Mr. Alessi, with only around a 1% probability of occurring by random chance. This is not rated as red because the exit poll data for this race was questionable. In addition, since Mr. Alessi lost the election, even if there was manipulation, it would not have affected the outcome of the race.

A Replication of My Work.

Mr. Brian Amos, a Ph.D. candidate at the University of Florida, was dedicated enough to replicate some of my work and confirm that he gets the same results I reported.

He does have a few disagreements with my approach. For example, to what he describes as a nitpick, I would respond: that’s a feature, not a bug! My choice of limiting an analysis to the precincts with more than 500 votes cast results in what he considers an overemphasis on the effect I am concerned with. This is absolutely true. That particular analysis was designed to draw out that effect and make it more apparent. The vote share data is very noisy and affected by many different factors. The trend is real, but it is easily missed in the inherent noise of the larger dataset.

Wichita 2014 Election Results

Mr. Amos wonders if some other correlated factor, such as the voter registration numbers, would display a similar trend in the cumulative chart. He shows this is true for the share of Republicans in this particular data set. But this is not a universally correlated trait across the different states where such trends have been found, and it was not enough in Sedgwick County, Kansas, to account for the difference in vote share.

I discuss this factor at more length in my recently published paper “Audits of Paper Records to Verify Electronic Voting Machine Tabulated Results” in the Summer 2016 issue of The Kansas Journal of Law and Public Policy. The graph displayed above is from that paper, illustrating that although there is an upswing in the cumulative graph for the share of Republicans, it is much smaller than the upward surge in the vote share for various Republican candidates in 2014.
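For readers who have not seen this kind of chart before, here is a minimal sketch of how a cumulative vote-share curve like the one above can be built from precinct-level results. The file name and column names are hypothetical placeholders; the idea is simply to order precincts from smallest to largest and track the running vote share.

```python
# Sketch: build a cumulative vote-share curve from precinct-level results.
# The file and column names are hypothetical placeholders.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("precinct_results.csv")  # columns: precinct, total_votes, candidate_votes

# Order precincts from smallest to largest by votes cast.
df = df.sort_values("total_votes")

# Running share of the candidate's vote as precincts accumulate.
cum_total = df["total_votes"].cumsum()
cum_candidate = df["candidate_votes"].cumsum()
df["cumulative_share"] = cum_candidate / cum_total

# A roughly flat curve means the candidate's share does not depend on precinct
# size; a steady climb through the largest precincts is the pattern at issue.
plt.plot(cum_total, df["cumulative_share"])
plt.xlabel("Cumulative votes counted (precincts ordered small to large)")
plt.ylabel("Cumulative vote share")
plt.title("Cumulative vote share by precinct size (illustrative)")
plt.show()
```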

His parting comment, “While the charts may be explainable through vote fraud, there are other, perfectly innocuous explanations that can be put forward, as well,” is true. Yes, there are other possible and innocuous explanations. Statistical analysis only illuminates correlations and other relationships. Further investigation is needed to determine cause. Just because the trend is a predicted sign of election fraud does not mean election fraud occurred.

The only way to tell whether our machine-tabulated vote count is accurate or has been undermined is to conduct a proper audit. That has never been done here in Sedgwick County. I’ve requested access to do this as a voter and been denied. I filed the proper paperwork in a timely manner asking for a recount of those records after the 2014 election and was denied. I’ve sued for access as an academic researcher and been denied.

Why should I trust a vote count that our officials will not allow to be publicly verified? Why should anyone?

Voting Equipment on the Agenda: The Sedgwick County Bid Board Meeting July 7, 2016

After hearing nothing from the elections office regarding the purchase of new voting equipment since their demonstration of the equipment in February, yesterday afternoon I got notice of a bid board meeting this morning at 10:00. As it happened, I was able to take time off work and attend.

The Voting System RFP (15-0078) was the only agenda item. It was perfunctory except for my presence; I asked inappropriate questions. Despite my previous requests to the elections office, I was never notified of any of the committee meetings at which the responses to the RFP were evaluated. I did not know that they were recommending Election Systems & Software until I attended this meeting. There was a handout listing the various vendors and the costs associated with them, and it indicated which vendor they recommended.

One commissioner asked why not “Everyone Counts, Inc.,” which had a lower total cost on the information handout they had provided. Tabitha Lehman answered, saying that system did not meet their basic security requirements and was eliminated for that reason. Their final recommendation was based on an overall score, but neither the scoring criteria nor the relative scores of the competing companies were included in the handout.

She was asked about the numbers on the handout, which were not self-explanatory with regard to the totals. Additional information on the number of machines to be purchased was needed to make sense of the totals. She mentioned that one reason they chose ES&S was that they were willing to buy back our current voting machines. Otherwise, the elections office would have to pay to have them removed, because there are security concerns regarding their disposal.

I asked what the cost of using voter-marked, hand-counted paper ballots would be. Ms. Lehman laughed at my question and said she had no idea; they had not even bothered to compute the cost for a comparison. [I had requested that she consider that option when I wrote her months ago offering my services and expertise for that committee.] She indicated that she did not consider hand counting acceptable due to how long it would take, saying that CA was still counting its primary from June 7th. [The Brexit vote was hand counted using voter-marked paper ballots, and results were available by the next morning.]

I then asked Ms. Lehman why I wasn’t informed of the committee meeting where the recommendations were decided. I was told that this was an inappropriate place to ask such questions; I could ask next Wednesday at the County Commission meeting. Personally, I don’t think that would be an appropriate venue either, nor the best use of whatever time I will be allowed. I’ll just conclude that she didn’t want me on her committee and was not required to allow the public to attend those meetings.

A commissioner then complimented her on the analysis and the combined expertise of the people on the committee. [There are no members of that committee listed as having expertise in Quality Assurance, one of my specialties.] The meeting was over at 10:15. The recommendation was accepted, and it will be on the agenda for the next county commission meeting. I will be there. I’ve requested a copy of the scoring criteria and results. Hopefully, I will be able to ask better questions at the meeting next week.

I would encourage anyone in Wichita who is free next Wednesday to attend the meeting and let our county commissioners know how dissatisfied we are with our current equipment and how concerned we are about the security and reliability of the proposed new voting equipment.

Another Analysis of the 2016 Democratic Primary

This is a solid analysis. I say this without having vetted their data collection; I’m assuming they did that part right. If so, the conclusion is obvious. The authors confine all the analysis to the appendix, so you can read the paper without having to understand any math.

Are we witnessing a dishonest election?

They found Sanders won 51% to 49% in places that had a paper trail. They found Clinton won 65% to 35% in places that don’t. That’s amazing! Yes, those are different states. Yes, they looked at other possible causes: they tested for that difference while accounting for the % of whites and the ‘blueness’ of the state. No, they didn’t find anything sufficient to explain that difference.

You don’t have to be a statistician to understand that’s a huge difference in proportion. It helps to be a statistician to understand the tests they ran checking other explanations and the resulting output. They ran appropriate tests, and the output is unequivocal, which they stated. I concur.
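I have not reproduced the authors’ actual models, but for readers curious what “accounting for the % of whites and the blueness of the state” looks like in practice, here is a hedged sketch of that style of covariate-adjusted comparison. The file and column names are hypothetical, and the authors’ own specification may differ.

```python
# Sketch of a covariate-adjusted comparison of Clinton's vote share in states
# with and without a paper trail.  File and column names are hypothetical;
# this is not the authors' actual model.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("state_primary_results.csv")
# expected columns: clinton_share, paper_trail (0/1), pct_white, blueness

model = smf.ols("clinton_share ~ paper_trail + pct_white + blueness", data=df).fit()
print(model.summary())

# The question is whether the paper_trail coefficient remains large after the
# covariates are included; per the paper, the covariates did not explain away
# the difference between paper-trail and non-paper-trail states.
```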

“As such, as a whole, these data suggest that election fraud is occurring in the 2016 Democratic Party Presidential Primary election. This fraud has overwhelmingly benefited Secretary Clinton at the expense of Senator Sanders.”

Redacted Tonight makes this article their lead story.

BTW, I absolutely loved their fake commercial for “Shut your f***ing tweethole” at the 15 min mark.

Authors’ response to criticisms

My work (some of my graphs and my previous post) is included in the appendix of the response article. Lots of interesting graphs there, too.