By Preston Schmitt ’14
Katherine Cramer ’94 walks into a small western Wisconsin diner expecting to meet some regulars over coffee. She’s led through a curtain in the back to a discreet, L-shaped table, where a group of locals gathers every morning to bet on a game of dice.
She’s an outsider by any measure — a younger political scientist at UW–Madison hoping to shoot the breeze with a klatch of older rural men. She disarms their skepticism with her familiar Wisconsin accent and knowledge of the local dice game.
Cramer is here to listen as they discuss their political views. One of the first topics is her employer, the state’s flagship university.
“The ones who end up going to the UW, going to the top-tier schools outside the state, usually have parents who [are] educated and know what the game is to be played. … Their parents are probably graduates and have probably really nice jobs,” says one of the men, with the others nodding. “These are poor people up here.”
From 2007 to 2012, as Cramer met regularly with this group and 26 others around the state, she came to recognize the resentment in that response. Rural distrust of urbanites and so-called elites bled beyond higher education into nearly every issue. As one of the dice players concisely put it: “I think you’ve forgotten rural America.”
And in 2016, election polls did.
Wisconsin, which had voted for a Democrat for president in every election since 1988, swung to Republican nominee Donald Trump in 2016 — to the shock of pollsters, pundits, and the public alike. The main reason? Rural voters carried the state for Trump by a 27-point margin, according to exit polls. Eight years earlier, Barack Obama had won them by nearly 10 points.
Cramer’s unconventional fieldwork informed The Politics of Resentment: Rural Consciousness in Wisconsin and the Rise of Scott Walker, which she published in March 2016. Just months before the election, Cramer’s book diagnosed precisely what polling in key swing states would miss come November — a rural rebuke of the status quo.
“Tapping into emotion is not about an issue or even about partisanship. It’s about people’s sense of who they are in the world,” Cramer says. “I think it came out of the blue to pollsters because they weren’t necessarily asking about these latent feelings — people’s sense of distribution of resources, or respect, or shared values.”
As the polling industry applies lessons from 2016 and looks to regain public trust, UW–Madison is diving directly into the discussion. Earlier this year, the Elections Research Center launched a battleground poll in Wisconsin, Michigan, and Pennsylvania, hoping to identify and quantify trends like the one Cramer discovered.
At stake? An essential tool of democracy.
What Went Wrong in 2016?
On the night of the 2016 election, as surprise results rolled in, the right delighted in what it saw as a massive failure by the polling establishment. The left felt betrayed by it. But both sides have appeared united in their distrust of polling since.
“No one anticipated that 2016 would be so consequential for the polling field,” says Courtney Kennedy, director of survey research at the Pew Research Center.
Sensing a pivotal moment for the industry, the American Association for Public Opinion Research (AAPOR) asked a national committee of pollsters to sift through the data and figure out what went wrong. Chaired by Kennedy, the committee began with a controversial question: did the polls really fail?
The answer is complicated.
Collectively, national polls accurately estimated the popular vote. They showed Democratic nominee Hillary Clinton with a three-point lead, and she won it by two points — one of the most precise margins since the advent of modern presidential polling.
Likewise, most state-level polls correctly indicated a competitive race in the Electoral College, which would actually determine the outcome. Before the election, RealClearPolitics averaged state polling and reported a nearly even split of electoral votes. “The polls on average indicated that Trump was one state away from winning the election,” stated the postmortem report that AAPOR published in 2017.
What some polls missed was the increase in support for Trump in northern swing states, where he completed an unlikely sweep of Wisconsin, Michigan, and Pennsylvania.
The final Marquette Law School Poll found 46 percent support for Clinton and 40 percent for Trump among likely voters in Wisconsin. Polls told a similar tale in Michigan and Pennsylvania. Polling aggregators and political pundits put the likelihood of a Clinton victory between 70 and 99 percent, viewing those three states as part of her “Blue Wall” of support. Fewer than 80,000 voters combined — less than one percent in each of those states — toppled it.
“Most of the models underestimated the extent to which polling errors were correlated from state to state,” wrote Nate Silver, founder of the polling aggregator website FiveThirtyEight, following the election. “If Clinton were going to underperform her polls in Pennsylvania, for instance, she was also likely to do so in demographically similar states such as Wisconsin and Michigan.”
The AAPOR committee chased every popular theory for what went wrong with polling in those states — including the “shy Trump voter” effect, the belief that some of his voters were reluctant to reveal their intention. For that one, it found no significant evidence.
What the committee did find was an unprecedented change in voter preference late in the campaign. Roughly 15 percent of voters in Wisconsin, Michigan, and Pennsylvania waited to make a decision until the final week of the race, when most polls were no longer actively in the field.
“Normally, those late deciders aren’t much of a factor,” Kennedy says. “Historically, they break about evenly between the two major-party candidates, so it kind of washes out. In 2016, in those battleground states, they broke for Trump by wide margins — 15 to 20 percent.”
Such a divergence would have been difficult for pollsters to anticipate, but the committee did identify a significant failing. Many polls, especially at the state level, neglected to adjust for an overrepresentation of college graduates in their samples. And those voters turned out strongly for Clinton.
“People with higher levels of formal education tend to be more likely to take polls,” Kennedy says. “And we’ve known this, frankly, for a long time.”
In elections past, polls could get away with not weighting for education, so long as they accurately adjusted for more predictive demographics like gender, age, and race. That held only because education didn’t yet divide the electorate: Democrats were still successfully courting some white working-class voters, who, not long ago — especially union workers — formed a key constituency of the Democratic base. “And in 2016, Trump turned that on its head,” Kennedy says.
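The education adjustment Kennedy describes can be sketched as simple post-stratification: each respondent is weighted by the ratio of their group’s share of the electorate to its share of the sample. The figures below are hypothetical, chosen only to show how a sample with too many college graduates inflates one candidate’s number:

```python
# Sketch of post-stratification weighting by education level.
# All figures are hypothetical, chosen only to illustrate the idea.

# Share of each education group in the electorate (e.g., from Census data)
population_share = {"college_grad": 0.35, "non_grad": 0.65}

# Share of each group in the raw poll sample (graduates over-respond)
sample_share = {"college_grad": 0.50, "non_grad": 0.50}

# Each respondent's weight = population share / sample share for their group
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Hypothetical candidate support within each group
support_for_A = {"college_grad": 0.60, "non_grad": 0.45}

unweighted = sum(sample_share[g] * support_for_A[g] for g in sample_share)
weighted = sum(population_share[g] * support_for_A[g] for g in population_share)

print(f"Unweighted estimate: {unweighted:.1%}")  # counts graduates too heavily
print(f"Weighted estimate:   {weighted:.1%}")    # lower, closer to the electorate
```

With these made-up numbers, the unweighted estimate overstates candidate A’s support by a couple of points — roughly the size of the state-level misses in 2016.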
In a way, election polling’s reckoning was inevitable. Local news organizations have historically led or cosponsored polling efforts. And the quantity and quality of state-level polling have followed the trajectory of declining media budgets.
“It is a persistent frustration within polling and the larger survey research community that the profession is judged based on how these often-underbudgeted state polls perform relative to the election outcome,” the AAPOR report concluded. “The industry cannot realistically change how it is judged, but it can make an improvement to the polling landscape.”
And UW–Madison answered the call.
The Battleground Poll
“Wisconsin is the quintessential battleground state,” says Barry Burden, director of the UW’s Elections Research Center and professor of political science. “Almost magically, the forces that would help Republicans and the forces that would help Democrats seem to be in a perpetual balance.”
Most notably, while urban areas like Madison and Milwaukee are turning out strongly for Democrats, rural voters in the northern and western parts of the state are gravitating to Republicans.
And yet, aside from the highly regarded Marquette poll, the state lacks consistent public-opinion surveys to capture such trends.
UW–Madison was uniquely positioned to fill this polling gap. In 2015, the Department of Political Science established the Elections Research Center to centralize the research of more than a dozen faculty members across disciplines.
Many of the center’s experts have earned national reputations, appearing regularly in the New York Times, the Wall Street Journal, PolitiFact, and other media. Burden alone fields as many as 10 requests from journalists per day, with recent topics ranging from caucus-reporting technology, to diversity on the presidential ticket, to mail-in voting, to the electoral consequences of a pandemic.
The experts are also called to testify in court and before legislative bodies about their research. The Washington State Senate, for instance, asked Burden to share his findings on the effects of same-day voter registration, which the center had studied in Wisconsin.
To reach the broader public, they post to Twitter during debates, election nights, and other major political events. “[The work] requires a level of vigilance to keep up with current events,” Burden says. “One of the things my colleagues and I can offer is putting these events in context of research and broader patterns.”
The Elections Research Center partners with the data company YouGov and media entities to conduct public-opinion surveys. One survey made headlines during the 2018 Senate hearing on sexual assault allegations against Supreme Court nominee Brett Kavanaugh. UW researchers recycled questions from a survey in 1991, when Justice Clarence Thomas faced sexual-misconduct allegations. They found almost no movement in public opinion on whether the respective nominations should go through if the allegations were true, with around 70 percent of respondents in both surveys saying the nomination should not proceed. (However, respondents in 2018 indicated they were much more likely to vote against a senator if he or she supported the nominee.)
Encouraged by past efforts, the center launched the 2020 battleground poll, immediately contributing to the political discussion. Its first poll results in February dispelled a pervasive cable-news narrative. Throughout the 2020 Democratic presidential primary, the networks used exit-polling data to demonstrate that the top motivation for Democratic voters was “electability” — whoever had the best chance to beat Trump.
But Burden wondered whether that was the result of Democratic voters being faced with a false dichotomy: do you care about the issues or beating Trump? When the UW’s poll offered more options to respondents, it found that 37 percent predominantly supported a primary candidate because of key issues, 22 percent because of his or her chances to beat Trump, and 20 percent because he or she was the most qualified to be president.
And to no one’s surprise: the poll also showed a close race between Trump and several of the then–Democratic candidates in Wisconsin, Pennsylvania, and Michigan. The center plans to conduct several more surveys before November’s election.
The battleground poll continues a legacy of elections research at the university, though conditions have vastly changed over the past two decades.
The Landline Era
From 1998 to 2008, UW political scientists ran the Wisconsin Advertising Project, which analyzed presidential and statewide campaign ads aired on TV in more than 200 media markets. It pulled back the curtain on how candidates communicated with voters.
Around the same time, the UW Survey Center administered the Badger Poll to measure statewide public opinion. Cramer, who’s now an affiliate of the Elections Research Center, served as the faculty director of the Badger Poll. She started to notice overlapping trends in polling results and her fieldwork. “Many people thought that rural areas of the state didn’t get their fair share of state taxpayer dollars,” she says.
The Badger Poll’s methodology was to sample some 500 Wisconsin residents by randomly selecting households with active landlines. It mailed notices to households, redialed numbers as many as 10 times, and intensively trained interviewers to keep potential respondents on the line. Those efforts helped the Badger Poll, which ran 32 surveys between 2002 and 2011, regularly achieve a response rate of roughly 40 percent.
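For context, a sample of about 500 respondents carries a sampling margin of error of roughly plus or minus 4 percentage points at the conventional 95 percent confidence level. This is a standard back-of-envelope calculation, not a figure from the Badger Poll itself:

```python
import math

# Worst-case (p = 0.5) margin of error for a simple random sample,
# at the conventional 95 percent confidence level (z ≈ 1.96).
def margin_of_error(n: int, z: float = 1.96) -> float:
    return z * math.sqrt(0.25 / n)

print(f"n=500:  ±{margin_of_error(500):.1%}")   # about ±4.4 points
print(f"n=2000: ±{margin_of_error(2000):.1%}")  # quadrupling n only halves it
```

The square root in the formula is why pollsters rarely chase huge samples: halving the error requires quadrupling the sample size, and nonsampling errors (like the education skew above) don’t shrink at all.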
Now, such a response is a fantasy. If you’re willing to answer a call from an unknown number, you’re part of the 1 to 2 percent response rate that even high-quality phone surveys strive for today.
For as long as polling has existed, technology has uprooted methodology. And with the public becoming more difficult — and costly — to reach, pollsters are getting creative. The UW’s polling partner, YouGov, is an entirely online operation that’s finding ways to forgo representative sampling without sacrificing quality.
The Brave New World of Polling
By 1936, the Literary Digest had accurately predicted the winner of every presidential election of the past 20 years. That year, it embarked on one of the most ambitious polling operations of all time, mailing out mock ballots to 10 million people.
“When the last figure has been totted and checked, if past experience is a criterion, the country will know within a fraction of 1 percent the actual popular vote,” the Digest declared.
The magazine’s straw poll predicted that Franklin D. Roosevelt would lose in a landslide to Alf Landon and win just 43 percent of the vote. He won 61 percent.
That cataclysmic error — the result of selection bias from a disproportionately affluent mailing list, compounded by a high nonresponse rate — ushered in the modern era of scientific public-opinion polling. George Gallup and Elmo Roper, with much smaller but more representative samples, both predicted Roosevelt’s reelection and went on to found polling enterprises that remain active today. (The Digest folded 18 months later.)
For some 50 years, representative sampling easily differentiated the good polls from the bad. But as communications technology has evolved, the craft of high-quality polling has become much more complex. Whenever a new, pricier technology emerges — landlines, cellphones, the internet — pollsters must adopt it gradually or risk skewing their samples toward those with early access. The internet has posed another problem: while pollsters can access lists of every mailing address or phone number, there’s no master list of email addresses, making it impossible to recruit a representative sample digitally.
The UW’s polling uses an opt-in panel of people whom YouGov has recruited online and compensated for participation in other surveys (such as product testing for companies). While it’s not representative sampling, YouGov collects a wide range of personal information from its participants, which it uses to accurately weight the sample to the larger population. YouGov’s polls have called nearly 90 percent of recent political races correctly, according to FiveThirtyEight.
YouGov has learned to innovatively adjust for variables beyond demographics, including political ideology, volunteerism, and news consumption. Such measures have helped it address polling’s familiar foe: the overrepresentation of people with college degrees.
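Weighting on several variables at once is commonly done by “raking” (iterative proportional fitting), which cycles through each variable and rescales weights until every weighted margin matches its population target. The sketch below is a generic illustration with made-up respondents and targets, not YouGov’s actual procedure:

```python
# Generic sketch of raking (iterative proportional fitting). All names and
# target shares are hypothetical; this is not YouGov's actual method.

respondents = [
    {"educ": "grad",     "ideology": "liberal"},
    {"educ": "grad",     "ideology": "conservative"},
    {"educ": "grad",     "ideology": "liberal"},
    {"educ": "non_grad", "ideology": "liberal"},
    {"educ": "non_grad", "ideology": "conservative"},
]

# Known population margins to match, one set per weighting variable
targets = {
    "educ":     {"grad": 0.35, "non_grad": 0.65},
    "ideology": {"liberal": 0.50, "conservative": 0.50},
}

weights = [1.0] * len(respondents)

for _ in range(100):  # cycle until the weighted margins settle
    for var, shares in targets.items():
        for level, target in shares.items():
            members = [i for i, r in enumerate(respondents) if r[var] == level]
            current = sum(weights[i] for i in members) / sum(weights)
            for i in members:
                weights[i] *= target / current

# Every weighted margin now matches its population target
total = sum(weights)
for var, shares in targets.items():
    for level in shares:
        share = sum(w for w, r in zip(weights, respondents)
                    if r[var] == level) / total
        print(f"{var}={level}: {share:.3f}")
```

Real panels rake over many more variables — the ideology and news-consumption measures mentioned above among them — but the mechanics are the same: no single respondent is discarded; each simply counts for more or less.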
“This is not a fly-by-night internet operation,” Burden says. “They are staffed by a lot of academics and political scientists who understand the scientific goals we have.”
High-quality polls seem to be navigating this new landscape effectively. The 2017–19 election cycle was one of the most accurate on record for polling, according to FiveThirtyEight, with the lowest average error margins since 2003–04.
So where does that leave us for 2020?
Trust Polling to Do What?
As I talked to the experts in this story, I started with the same question: “Why is polling important?”
In March, I read Jill Lepore’s New Yorker article “The Problems Inherent in Political Polling.” She argued that polling, like gauging public opinion through social media, is a flawed substitute for the real thing. “Democracy requires participation, deliberation, representation, and leadership — the actual things, not their simulation,” she wrote.
I, too, had become a skeptic.
But the experts reminded me that — beyond the horse-race polling numbers that tend to populate the headlines — collecting public opinion can serve a critical function of democracy.
“If you think about a society where there’s no polling, the leader of that country can just go out and say, ‘Well, I know the public feels this way,’ ” Kennedy says. “Contrast that with a society where there are a lot of independent pollsters and all of their data converge, so that, ‘Oh, actually, the public has this other attitude.’ It provides a reality check for people in office.”
According to Burden, surveys can shed light on an election beyond the question of who voted for whom. “Public opinion polling asks voters directly, ‘What are you thinking? What matters to you? What are your positions on the issues? What would you like to see government do? What is your ideal vision of society?’ ”
I had also lost sight of the golden rule of polling: that the results represent a snapshot in time, not a prediction. For pollsters, it’s a clichéd disclaimer. But for a public and media eager to speculate on who will win, misinterpreting polls as predictions can lead to false expectations and big surprises.
Through high-quality polling, insightful public-opinion analysis, and research-based commentary, UW experts are demonstrating a responsible way forward for our political dialogue. They’re uncovering how voters feel, not projecting what they will do. Cramer discovered rural resentment the old-fashioned way: “polling by walking around.”
As for the question everyone is asking — “Should I trust the polls after 2016?” — I saved that one for last.
“My response is, ‘Trust polling to do what?’ ” Kennedy says. “Are you asking me if you can trust polling to call the winner in a close election? I would say no. It’s not up to that task. But if the question is, ‘Can I trust polling to tell me how the public feels about Donald Trump? About [House Speaker] Nancy Pelosi? About health care? About the response to the COVID-19 pandemic?’
“Yes, polling is absolutely up to that challenge.”