President Trump's voter fraud commission has the stated goal of ensuring the integrity of the vote as "the foundation of our democracy." But, like the buried foundations of a building, who votes and how they vote aren't easy things to examine.
In alleging that there's widespread voter fraud, commission Vice Chair Kris Kobach has relied on proxies, such as the indirect measure of matching up names in voter registries to identify people registered in more than one state. In the lead-up to the commission's second meeting last week, he also railed against thousands of New Hampshire voters who registered using out-of-state licenses — which he claimed proved that people were hopping state borders to illegally swing elections.
The experts I spoke with said those metrics don't really measure the existence or risk of illegal voting. In fact, they said, it's probably impossible to conclusively prove or disprove allegations of widespread illegal voting — though they pointed out that very few cases have ever been found and prosecuted, even as Kobach is aggressively seeking them out to prove his hypothesis of rampant voter fraud.
When Kobach employs these proxies as proof of voter fraud, though, he is implicitly suggesting that changes need to be made to the voting system to protect its integrity, such as ensuring that the same name never turns up on multiple registries and voters never use out-of-state licenses at the polls. But those irregularities exist because of the fundamental American values the commission is dedicated to protecting: You can't easily and swiftly clean up registry errors without disenfranchising millions of voters. And you can't set up a uniform, nationalized voter registry in a country whose founding values are based on limited federal control.
The problem with proxies is that they do more to demonstrate the complex nature of American values than they do to prove our elections are rigged.
If Kobach were simply claiming that voter registries are messy — full of errors and inaccuracies — he'd be correct. Research published by the Pew Center on the States in 2012 estimated that 24 million registration records (13 percent of all the registrations in the country) contained information that was likely inaccurate — names that had changed, addresses that were no longer up-to-date, people who had died, simple typos. And double-registered voters — a favorite target of Kobach's — reached nearly 3 million. Likewise, he's also right that people do sometimes vote in states where they aren't officially residents. That's particularly true of college students, who might spend most of their time in a place they don't technically live. Depending on local laws, those students can use out-of-state licenses to prove their identities at the ballot box.
But experts say that neither of these proxies is particularly good evidence of illegal voting. Primarily, that's because both things are 100 percent legal and exist for reasons that have nothing to do with fraud. Take double registration, for instance: When Americans are double registered, it's usually because they've moved and their names were never cleared out of the system in their previous state of residence.
We did a quick survey of FiveThirtyEight staffers by checking voter registration rolls in the states they've lived in over the past 15 years. Out of 15 people who participated, five were double-registered. I'm one of them, with active voter registrations in Minnesota, where I live, and Alabama, a state I last lived in in 2006. Three staffers were only registered in states they no longer live in. One person wasn't registered anywhere, much to his surprise. Bottom line: Americans don't stay in one place forever, and bureaucracy doesn't always keep up with us.
Then there's the specter of out-of-state voters. Kobach claimed that more than 5,000 people had come to New Hampshire from other states to vote in (and try to change the outcome of) the November election. His proof was a list of people who had taken advantage of New Hampshire's same-day registration laws, had used out-of-state driver's licenses to verify their identities and had not later applied for New Hampshire licenses or vehicle registrations. Kobach has received plenty of pushback on the idea that this meant they weren't legitimate Granite State voters, including from other members of the commission during last week's meeting. That's because it's likely that many of those people whom he called fraudulent voters were actually college students voting in New Hampshire because that's where they spent most of their time and where they were living when Election Day rolled around. The Washington Post found several individuals who attested to having done just that, and the cities with the highest number of out-of-state-license voters were college towns.
Just because these practices don't prove voter fraud, though, doesn't mean they aren't confusing and even at times problematic. It's certainly not ideal to have voter registries loaded with the "dead wood" of misspelled names and people who've left the state, said Charles Stewart, professor of political science at MIT. Those errors can prevent people from voting if, say, their current address and registry address don't match. People in that situation could be turned away or forced to file provisional ballots.
And Stewart said he believes they suggest deeper administrative problems — especially when the state doesn't know exactly how many errors its voter rolls contain. "What if a school said, 'We don't know how many people graduated'? We'd be really suspicious of public officials that had sloppy reporting," Stewart said. "It's generally good public policy to have good records."
That's why states go through the process of cleaning up voter registration rolls — removing the dead and the people who have left the state to try to maintain an accurate count of voters. But here's where American values conflict with clean database management: You can't just unceremoniously purge people from the records because they haven't voted in a while or because they appear to be registered in another state, said Walter Mebane, professor of political science and statistics at the University of Michigan.
The National Voter Registration Act prevents states from doing just that because it's likely to end up illegally stripping people of their right to vote.1 States have to go through a process of trying to match voter registry records to other kinds of data and alerting voters if it looks like they should be removed. There's no uniform procedure for this, and the quality of registry maintenance (and election administration in general) varies widely from state to state. The courts are still hashing out what is and isn't appropriate. For instance, the Supreme Court will hear arguments in November in a case on Ohio's registry maintenance methodology, which purged voters from the rolls if they hadn't voted in six years.
You could fix the problem — and probably make it easier to see if people have truly double-voted, not just double-registered — by having a single national voter registry, Mebane told me. "But there's no reason to worry about that because it would never happen," he said, explaining that it would be anathema to our national values.
Those values strongly favor local control of elections, even when it's not the most efficient choice. That preference dates back to the beginnings of the country, when county officials tallied in-person voice votes from citizens who didn't need to be registered at all. As things like the secret ballot and voter registration were added into the mix, cities, counties and states came up with different ways to handle the new complications, collect the records and administer the elections. Today, elections are governed by states, but a lot of the nuts-and-bolts management still happens at the city or county level — often in ways that vary from one town to another. And shifting away from that diverse local control probably wouldn't be terribly popular, given that Americans' confidence in election results and fair handling of votes decreases as the level of administration moves further from where they live.
The same is true with out-of-state voting: You can simplify the system, but that would conflict with other values. Courts have repeatedly said students can vote where they study. "Nobody can lose their right to vote because of issues with residency as a student," said Marc Meredith, professor of political science at the University of Pennsylvania — something that would be likely to happen if students were forced to travel back to their home states on Election Day in the middle of their fall semesters.
But Americans are generally less supportive of students voting outside their home states than we are of other 20th-century voting reforms, Stewart said. "There's a sizeable number of people in the public who just believe that college students should vote where their parents live."
He based that on the unpublished results of questions he asked in the Cooperative Congressional Elections Study in 2013. Although most Americans — 65 percent — said expanding where students could vote improved elections, respondents were less supportive of that than they were about other kinds of reforms — like extending the vote to women.
In other words, Americans are both suspicious of thousands of people from "someplace else" tipping an election and have also set up the legal system to support expansion and protection of the right to vote, even for people who are, technically, from someplace else. The result is a jumble of laws that make the ability of college students to vote — and what forms of ID and documentation they have to bring with them to the polls — vary unpredictably from state to state, even county to county. Even someone like Kobach — a state election official who has made his national career on issues surrounding election transparency — can't be expected to know what is legal and what isn't nationwide, experts told me. There's just too much diversity.
But the data mess explains why it's difficult to make a case around voter fraud from either side. Just because a situation isn't ideal doesn't mean it's proof of illegal voting. Instead, Meredith said, he wishes Kobach and the commission would focus on finding better ways to systematically study voting — ways that line up with both the needs of researchers and American values. "Your hope would be that's what a voter integrity commission would be," he said. "Rather than jumping to conclusions on the basis of proxies that may or may not have validity."
Poll of the week
Republicans in the U.S. Senate have just over a week, until Sept. 30, to pass an Obamacare repeal bill with a bare majority (instead of 60 votes). But in the rush of whip counts and CBO scores, don't forget: This is an incredibly dangerous debate for Republicans. The public, through a variety of poll results, has made plain that it doesn't like what the GOP is doing.
The latest YouGov poll, for example, found that 38 percent of respondents picked Democrats as the party that would do "a better job handling the problem of health care"; 24 percent picked Republicans. The Affordable Care Act, meanwhile, has a positive net favorable rating, and the various GOP repeal-and-replace bills have generally polled terribly.
President Trump should also be worried about an unpopular health care bill passing. His overall job approval rating has climbed in recent weeks as news networks have been focused on hurricanes, but his approval rating has tended to decline when Americans are more focused on the health care debate. Trump himself has an approval rating of just 27 percent on the issue of health care, according to the latest NBC/Wall Street Journal survey.
So why are Trump and congressional Republicans barreling on anyway? Republican voters want them to. According to a Politico/Harvard T.H. Chan School of Public Health poll, 53 percent of Republicans said repealing and replacing Obamacare was an "extremely important priority" for them. That 53 percent was higher than it was for any other issue polled.2 Lowering taxes, which Republicans are also gearing up to do, was rated as extremely important by just 34 percent of Republicans.
The question therefore for Republicans is whether they want to pass a bill and upset the electorate at large or leave a seven-year promise to repeal Obamacare unfulfilled and upset their base. Neither option is all that appealing politically.
Other polling nuggets
- It’s close in Virginia — Democrats were perhaps hoping that Trump’s unpopularity would allow Ralph Northam to run away with the Virginia governor’s race. It hasn’t happened. In an average of five surveys conducted this month, Northam is nursing a 45 percent to 41 percent lead over Republican Ed Gillespie. Northam may have more room to grow because African-Americans, who overwhelmingly vote Democratic, tend to make up a disproportionate share of undecideds in these polls. But also remember that the link between how voters feel about a president and how they vote for governor isn’t as strong as you might think.
- How students understand free speech — UCLA Professor John Villasenor published a poll this week in which college students offered their opinions on free speech. Among the findings: A plurality of students said the First Amendment does not protect hate speech (44 percent to 39 percent). A slim majority said it is OK for students to shout down a guest speaker (51 percent to 49 percent). And finally, 19 percent of all students (and 30 percent of male students) said it was OK for students to use violence to prevent someone from speaking. I highly suggest reading the entire poll.
- Moore remains ahead in Alabama — The Alabama Republican primary runoff is Tuesday, and the GOP establishment should be worried. Firebrand conservative Roy Moore led Sen. Luther Strange in two polls released this week — 53 percent to 47 percent in a Strategy Research poll and 50 percent to 42 percent in a JMC Analytics poll. Still, Moore's 8-point margin in the latter poll is down from 19 points the last time JMC Analytics surveyed the race. Put another way: Moore is the favorite, but don't be shocked if Strange pulls it out.
- Bill de Blasio is cruising to re-election — After New York Mayor Bill de Blasio captured nearly 75 percent of the Democratic primary vote last week, a new Marist College poll suggests that he may come close to that percentage in November's general election. De Blasio was ahead 65 percent to 18 percent over Republican Nicole Malliotakis. Perhaps that shouldn't be too surprising given the heavy Democratic registration edge in New York City. Remember, though, that New York didn't elect a Democratic mayor in any of the five elections before de Blasio won in 2013.
Trump’s job approval ratings
Trump’s job approval rating is 39.5 percent. His disapproval rating is 53.6 percent. Both of those are improvements for Trump over last week’s 38.5 percent to 55.6 percent spread, and they continue a longer-term positive trend for the president. Just last month, his approval rating was below 37 percent, and his disapproval rating was above 57 percent. The timing of Trump’s improved numbers lines up pretty well with Hurricane Harvey making landfall in the U.S.
The generic ballot
Democrats are ahead of Republicans 46.4 percent to 38.6 percent on the generic congressional ballot. That’s a slight improvement for Republicans from last week when they were down 45.5 percent to 36.0 percent.
Before the Super Bowl in February, we published a fairly comprehensive guide for when to go for 2, simplified into one slightly complicated (but very easy to use once you get the hang of it!) chart. In addition to hopefully demystifying how to judge a lot of borderline situations, we identified some fairly clear-cut cases in which NFL coaches should choose to go for 2 but don't. Ever.
My hope, of course, was that teams would read this (or figure it out on their own) and that we'd see an immediate and cataclysmic shift in 2-point strategy — like going for it when down 4, 8, or 11 after scoring a touchdown late (which are not only real cases, but ones that are usually clear-cut and significant). But, alas, no such luck.
The logic is pretty simple: If you can estimate your team's chances of winning with an X point lead/deficit (X points being how many points you are up or down following a touchdown) and your chances of winning with X+1 and X+2, the decision follows from simple arithmetic. In fact, given that 2-point attempts and extra-point attempts taken from the 15-yard line (under the new rules implemented in 2015) now have roughly the same expected point value (both around 0.95 points), the choice is easier than ever. Simply calculate (or estimate):
- The improvement in win percentage if your point margin changed from X to X+1.
- The improvement in win percentage if your point margin changed from X+1 to X+2.
If the first number is greater, kick the extra point. If the second is, go for 2.
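That two-step comparison can be sketched in a few lines of Python. Note that `win_prob` here is a stand-in for whatever win probability estimate a team has on hand (a model, a lookup table, or intuition), and the toy numbers below are invented purely for illustration:

```python
def try_decision(win_prob, x):
    """Pick the try after a touchdown leaves you at point margin x.

    win_prob is any callable mapping a point margin (positive = leading)
    to an estimated chance of winning; it's a placeholder, not a real model.
    """
    gain_kick = win_prob(x + 1) - win_prob(x)      # win-prob gain from the 1-point kick
    gain_two = win_prob(x + 2) - win_prob(x + 1)   # extra gain if a 2-point try converts
    return "kick" if gain_kick > gain_two else "go for 2"

# Toy numbers: down 2 late, tying the game (margin 0) is worth far more
# than merely pulling within 1, so the rule says go for 2.
toy = {-2: 0.05, -1: 0.08, 0: 0.45}.get
print(try_decision(toy, -2))  # go for 2
```

Because kicks and 2-point tries now carry roughly the same expected points, this bare comparison of marginal win-probability gains captures the article's rule; a fuller treatment would weight each gain by its success probability.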
Now, you can estimate or intuit these differences on your own on the fly, or you can use a fancy win probability model like we have,3 but the logic is the same.
Of course, we've taken it a bit further — our chart uses multiple sets of assumptions to create a range for each scenario covering teams that are relatively better or worse at 2-point conversions than our baseline. In case you missed it, here's the chart:4
A quick note on reading this chart: It may look a little "loud," but that's a feature for looking up scenarios lightning-fast. For a quick approximation, you first look at the minichart corresponding to the point spread (after the touchdown). If the quarter you're in is shaded bright purple, you probably want to kick; if it's bright orange, you should probably go for it. If you're in a rush, you could stop there and be in pretty decent shape.
Through the first two weeks of this NFL season, teams have gone for 2 (from the 2-yard line) eight times overall. More importantly, of the 30 times that the numbers say they should have gone for 2, they did so just four times, for a rate of 13 percent. Since 2015, in the regular season and playoffs, teams that should have gone for 2 have done so around 15 percent of the time.
Now, of course it's possible that some teams are better or worse at going for 2 than average, but it isn't possible that 85 percent of teams are worse than average. I've also calculated how often teams should "clearly" go for 2 — meaning situations in which they should go for it even if they are relatively quite bad at 2-point attempts5 — and there have been 16 such cases through Week 2:6
| WEEK | TEAM | OPPONENT | QUARTER | TIME | SCORE AFTER TD | MAGNITUDE | WENT FOR IT |
|------|------|----------|---------|------|----------------|-----------|-------------|
| 2 | New Orleans | New England | 4 | 5:04 | -17 | 0.10 | |
Teams made the correct decision in four of those 16 cases, for a 25 percent rate. (For comparison: Since 2015, regular season and playoffs combined, teams have gone for 2 points 27 percent of the time in âclear goâ scenarios.)
Of course, a decision being clear-cut doesn't mean that it matters a whole lot, but note that even among the decisions with the most significant consequences, teams are still making the wrong choices regularly (most likely because of adherence to Dick Vermeil's rigid and outdated system that leads them to repeat the same mistakes over and over). In particular, the aforementioned scenarios of being down 4, 8, or 11 points late are both quite clear and quite important.
Another significant case is when a team scores to pull within 2: Go for 2! This may seem like an obvious one, but since 2015, teams in this situation have chosen to kick the extra point as late as the fourth quarter (once, which is way too many times), and they've done so half the time in the third quarter (6 of 12, and still very bad) and 77 percent of the time in the second quarter (10 of 13, and still pretty bad, especially for such an early decision).
This season, teams down 4, 8 or 11 late are holding steady at a 0 percent correct rate, having attempted extra points five out of five times when they "clearly" should have gone for it. That means that over the past three seasons, they've gotten these right exactly zero times in 105 chances.
On a slightly brighter note, teams have been down 2 points after a touchdown twice this season — both in the third quarter — and they've correctly tried to tie the game both times! It's not quite the revolution — it isn't really even shots fired. But maybe, just maybe …
Things That Caught My Eye
It sure looks like the Minnesota Twins are going to snag the American League’s second wild-card slot in the playoffs, and needless to say it’s going to be difficult to get past the recently streaking Indians, top-notch offense of Houston, or the Yankee-Red Sox industrial complex. They’ve got a two in three chance of nabbing the potentially doomed playoff spot. [FiveThirtyEight]
The AFC West — the Kansas City Chiefs, Oakland Raiders, Denver Broncos, and some itinerant caravan of rootless football professionals describing themselves as Chargers — is stacked this year, with the Chiefs, Raiders and Broncos all with higher-than 50 percent chances to make the playoffs according to ESPN’s football power index. [ESPN]
The Las Vegas Golden Knights are 2-0 so far through the NHL preseason, which is their first as a franchise. Technically speaking, that makes them the only entirely undefeated team playing at the moment. Hockey starts up again October 4. [Knights on Ice]
Not including Monday Night Football, the average Week 2 NFL game lasted 3 hours, 4 minutes — down slightly from Week 2 of 2015 and 2016. Obviously we’re going to need a few more weeks of data before making a definitive declaration about the speed of play, but early numbers appear promising. [ESPN]
All baseballs go through the air a little differently — a lower seam here or a smoother ball there marginally affect how they travel — but those slight differences have been getting slighter. Judging by a measure of air resistance, the baseballs used in MLB play since 2008 have been getting more and more internally consistent when it comes to how they fly, which ends up affecting how far they go, which might explain… [FiveThirtyEight]
The number of home runs hit league-wide in the 2017 season when Kansas City’s Alex Gordon connected for one in the eighth inning on Tuesday night, topping the major-league record set in the 2000 season. The league is currently on pace for 6,140 homers. [ESPN]
Leaks from Slack
Sox going to extra innings again. 2nd day in a row.
dammit, @neil, you caused this
I only caused it if they end up losing
by reminding them how lucky 14-3 is in extras
[The Red Sox won and were subsequently 15-3 in extra innings]
Oh, and don’t forget
Everyone should have seen the Graham-Cassidy Obamacare repeal bill coming. But we didn’t.
Democrats had spent months defending the Affordable Care Act — and they appeared to have succeeded. So just over a week ago, a group of liberal members of the U.S. Senate rolled out their proposal to create a Medicare-for-all program. The group, led by Bernie Sanders, didn’t directly say, “We saved Obamacare, so now it’s time to move on to something even more liberal,” but that was the gist.
How did Democrats end up getting caught so flat-footed, putting out a single-payer proposal that essentially has no chance of becoming law until the White House changes hands while an effort to repeal one of the party’s signature achievements of the last decade gained strength? Because aside from Sens. Bill Cassidy of Louisiana and Lindsey Graham of South Carolina, basically everyone in Washington — Republicans, Democrats, the media — assumed the Obamacare repeal effort was dead. Two weeks ago, President Trump was suggesting that Republicans needed to give up on Obamacare repeal and focus on tax reform, Sen. Lamar Alexander of Tennessee was writing a bipartisan bill to fix Obamacare and Senate Republican leaders were downplaying the possibility that the Obamacare repeal effort would be revived.
So what happened?
Most importantly: Dean Heller of Nevada moved from a weak no to a firm yes — but no one really noticed.
The rise of Graham-Cassidy began on the afternoon of July 27 — hours before the Obamacare repeal effort seemed to die in the Senate. (GOP Sens. Susan Collins, John McCain and Lisa Murkowski formally voted down the “skinny” repeal after 1 a.m. on July 28.)
On that summer Thursday, Heller — who had been one of the Republican holdouts on a bunch of other Obamacare repeal proposals, arguing they cut Medicaid too deeply — became a co-sponsor of the Graham-Cassidy bill. (Estimates suggest Graham-Cassidy will cut federal dollars going to states for health care by up to $400 billion from 2020-2026, much less than the more than $700 billion in estimated Medicaid cuts that were included in some of the proposals Heller opposed.)
It’s not totally clear why Heller signed on to Graham-Cassidy. He may have assumed it would never actually come up for a vote. He may have been worried about re-election: Republican donors in Nevada were reportedly warning Heller that they wouldn’t give him money for his 2018 re-election effort unless he backed Obamacare repeal, and Trump suggested he would oppose Heller in a GOP primary if the senator didn’t join the cause. Or perhaps Heller simply believes in the Graham-Cassidy model of health care policy reform, which would send most Obamacare funds back to states.
Either way, co-sponsoring the bill was an odd move for Heller, largely because he had previously suggested he would back only legislation that both preserved the expanded Medicaid funding Nevada had received through Obamacare and had the support of the state’s GOP governor, Brian Sandoval. Even in July, it was clear that Graham-Cassidy would likely reduce the number of federal dollars going to Nevada for Medicaid, a conclusion further supported by recent estimates. Sandoval didn’t endorse the legislation back then, and this week he joined a bipartisan group of governors opposing it.
Whatever his reasons, Heller’s support was key, making the Senate math much easier for Cassidy and Graham. Back in July, only three GOP senators (Collins, Heller and Murkowski) had been strong opponents of the Obamacare repeal bills, voting down both the full repeal of Obamacare and a partial repeal largely written by Senate Republican Leader Mitch McConnell. (Of the 52 GOP senators, the other 49 voted for at least one of those two provisions.)
The last-ditch “skinny” repeal bill (which did not include Medicaid cuts) was widely expected to pass because Heller supported it, providing what was thought to be the crucial 50th vote. But at the last minute, his “no” vote was replaced by McCain’s.
In other words, at the end of July, Republicans still had two months left to repeal Obamacare and only two real, solid opponents of their repeal ideas: Collins and Murkowski. They were the only ones to vote against all versions of the repeal, though a number of their GOP colleagues had also said they were reluctant to support various bills. Despite expressing concerns about protecting Medicaid, Sens. Shelley Moore Capito of West Virginia, Jerry Moran of Kansas and Rob Portman of Ohio all eventually voted for a version of Obamacare repeal that would have cut Medicaid spending. So did McCain, who said some of his objections to the “skinny” repeal bill were about the process by which it had been written (without any Democratic input and without going through the traditional committees and hearings). Mike Lee of Utah and Rand Paul of Kentucky, two of the most conservative GOP senators, had voted for “skinny” repeal, despite complaining that the Obamacare repeal proposals left much of the ACA in place.
So assuming Murkowski and Collins were the only real holdouts, Heller’s support gave the Obamacare repeal 50 votes — at least in theory.
Meanwhile, Cassidy and Graham spent much of August and early September touting their bill. Senate Republican leaders were not enthusiastic about coming back from their summer recess to face another attempt at an Obamacare repeal. Neither were rank-and-file senators. But no senator was actually saying, “I will vote against this bill if it comes to the floor.”
Fast forward to this week and it’s easy to see why Senate Republicans want to give Obamacare repeal a final try. Yes, McCain is a problem, because this bill is, like the July legislation, a GOP-only proposal written outside of the traditional committee process. And he demonstrated in July that he is not afraid to be the deciding vote against an Obamacare repeal.
But McCain has not really given any policy-driven reasons for voting this bill down. And Graham is a very close friend of his. He may still vote yes.
Paul ultimately backed the skinny repeal bill in July despite his early objections, so Republican leaders are probably betting that his threats to vote against this bill are also empty. That’s not an unreasonable assumption.
Collins and Murkowski still sound like “no” votes, and they consistently voted “no” before. But if Collins and Murkowski are the only noes, the Republicans can pass Graham-Cassidy. So look for Paul and McCain to get plenty of calls from the White House and fellow Republicans imploring them to back this legislation, and for the Democrats to back off talking about Medicare-for-all for a bit. In short, the GOP is exactly where it was at the end of July, but with much less time left to get a deal done.
The NFL will take over London for the 18th time — and the 11th consecutive year — this weekend when the Baltimore Ravens take on veteran overseas travelers the Jacksonville Jaguars at Wembley Stadium. The game will be the first of four set in England this season, the most that have been played in a calendar year.
For the NFL, the additional game — there have been three in London each of the past three seasons — represents a concerted effort to expand the popularity and global reach of its brand.7 For the British, it’s another chance to watch lousy football.
It’s no secret that the teams that NFL commissioner Roger Goodell has sent have been overwhelmingly bad — and we aren’t just talking about the Jaguars. According to FiveThirtyEight’s pre-game Elo ratings, the harmonic mean of both teams’ ratings — a balanced measure of matchup quality that can better detect when both teams in a game are either good or bad — has been below average in 13 of the 17 games played in London.8 On top of that, all four games to be played in London this year will be below average, according to the team’s current Elo ratings.
| YEAR | DESIGNATED AWAY | ELO | DESIGNATED HOME | ELO | HARMONIC MEAN | +/- AVERAGE |
|------|-----------------|-----|-----------------|-----|---------------|-------------|
| 2015 | New York Jets | 1478 | Miami | 1449 | 1463 | -37 |
| 2016 | N.Y. Giants | 1466 | L.A. Rams | 1481 | 1473 | -27 |
| 2009 | New England | 1630 | Tampa Bay | 1375 | 1492 | -8 |
| 2012 | New England | 1678 | St. Louis | 1393 | 1522 | 22 |
| 2008 | San Diego | 1600 | New Orleans | 1470 | 1532 | 32 |
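For reference, the harmonic mean of two ratings is just 2ab/(a+b). Because it sits closer to the smaller of the two values, one elite team can't drag a lousy matchup up to "average" the way an ordinary average would. A quick sketch, checked against the 2012 New England–St. Louis game above:

```python
def harmonic_mean(a, b):
    # Harmonic mean of two ratings; it is pulled toward the lower value,
    # so a strong team can't mask a weak opponent the way an arithmetic
    # mean (which would give 1535.5 for the 2012 game) would.
    return 2 * a * b / (a + b)

# 2012 London game: New England (Elo 1678) vs. St. Louis (Elo 1393)
print(round(harmonic_mean(1678, 1393)))  # 1522
```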
The Jaguars are a big part of this, of course. Jacksonville has played in London four times, and the Elo rating of each of those four Jaguar teams ranks in the bottom five (among all 34 teams). Joining them in that bottom five are the 2014 Oakland Raiders. And it turns out that the Raiders’ game against the Miami Dolphins that year was the worst London matchup so far based on our Elo ratings. That game was so dreary that those Raiders, who fell to 0-4 after losing to Miami, fired their coach, Dennis Allen, not long after their plane touched down in the U.S. Perhaps by no coincidence, the Dolphins coach that year, Joe Philbin, would be fired the next season after starting 1-3. Philbin’s last game would be a loss to the Jets … in London.
But not every game played in London has been between NFL bottom feeders — sometimes a good team makes the trip (and, sure, plays a bottom feeder). The Brits have experienced Tom Brady and the New England Patriots twice, as well as the San Francisco 49ers the season after their latest Super Bowl appearance. But if you remove those three teams, the average London team,9 including this year’s Ravens and Jags, has an Elo rating of 1444. That’s roughly on par with this year’s 0-2 Cincinnati Bengals.
NFL fans will generally tune in regardless of who is playing. So perhaps the NFL’s hope was that the consistently poor quality of the teams would be offset by competitive, exciting contests. If that’s the case, the plan is generally working.
| YEAR | DESIGNATED AWAY | POINTS | DESIGNATED HOME | POINTS | POINT DIFF | WON BY ONE SCORE |
|------|-----------------|--------|-----------------|--------|------------|------------------|
| 2007 | New York Giants | 13 | Miami | 10 | 3 | ✓ |
| 2008 | San Diego | 32 | New Orleans | 37 | 5 | ✓ |
| 2016 | New York Giants | 17 | L.A. Rams | 10 | 7 | ✓ |
| 2015 | New York Jets | 27 | Miami | 14 | 13 | |
| 2009 | New England | 35 | Tampa Bay | 7 | 28 | |
| 2012 | New England | 45 | St. Louis Rams | 7 | 38 | |
Ten of the 17 games — or 59 percent — have been decided by one score. That might not sound so thrilling, but just 35 percent of all NFL games played since 2007 have been decided by 8 points or fewer. One of last year’s London games was so tightly matched, no one won it. (Fortunately for Cincinnati and Washington, they were playing in the one NFL location where fans are content with a tie.)
Low-quality games usually lead to drops in attendance toward the end of the season. Not in London, though. All but two games have attracted a crowd of more than 80,000, with the highest NFL London crowd at 84,488 — for last year’s tie at Wembley. To put that in context, a draw of that size would have been the second-highest home attendance of any team in the league last season (behind only the Dallas Cowboys).
As Goodell continues to push some of his most mediocre teams onto the international scene, it turns out that they’re rewarding fans with some of the league’s most competitive play.
You’re reading Significant Digits, a daily digest of the numbers tucked inside the news.
With Nicaragua reportedly set to join the Paris climate accords — they held out in 2015 because the nation believed the deal didn’t go far enough — there are now only two holdouts from the landmark deal: Syria and the United States, which President Trump said would pull out of the agreement. [Bloomberg]
A California bill awaiting the signature of Gov. Jerry Brown would outlaw puppy mills, banning pet stores from selling cats, dogs and bunnies that did not come from a shelter or rescue. [The New York Times]
This is easily the most staggering statistic I have come across while writing this column: There have been 1,772 individual episodes of HGTV’s “House Hunters” since it debuted in 1999. I could watch an episode of “House Hunters” every day for nearly five years without seeing a single repeat. When we’re just a radioactive cinder in the gaze of an expanding sun, whoever or whatever succeeds us will be able to say, “damn … they were good at finding and obtaining houses.” [Vulture]
80,000 311 calls
Hurricane Sandy left an indelible mark on New York City, and the effects of the storm can still be seen and felt years later. More than 36 million calls were placed to NYC’s 311 service from just before Sandy hit in late 2012 through earlier this week. Nearly 80,000 of them were related to the storm. And the tail is super long — 142 such calls were made in 2017 (as of Monday). [FiveThirtyEight]
3.5 million people
Hurricane Maria has left the entire island of Puerto Rico and its 3.5 million residents without power. That’s to say nothing of flooding and other destruction. Maria, now a Category 3 storm, is currently hitting the Dominican Republic. [BBC]
Russian trade with North Korea more than doubled to $31.4 million in the first quarter of 2017. Reuters found eight North Korean fuel ships that left Russia ostensibly en route to China or South Korea, only to change their final destination to North Korea. [Reuters]
Like Significant Digits? Like sports? You’ll love Besides the Points, our new sports newsletter.
If you see a significant digit in the wild, send it to @WaltHickey.
This is the 11th and final article in a series that reviews news coverage of the 2016 general election, explores how Donald Trump won and why his chances were underrated by most of the American media.
Two Saturday nights ago, just as Hurricane Irma had begun its turn toward Florida, the Associated Press sent out a tweet proclaiming that the storm was headed toward St. Petersburg and not its sister city Tampa, just 17 miles to the northeast across Tampa Bay.
Hurricane forecasts have improved greatly over the past few decades, becoming about three times more accurate at predicting landfall locations. But this was a ridiculous, even dangerous tweet: The forecast was nowhere near precise enough to distinguish Tampa from St. Pete. For most of Irma’s existence, the entire Florida peninsula had been included in the National Hurricane Center’s “cone of uncertainty,” which covers two-thirds of possible landfall locations. The slightest change in conditions could have had the storm hitting Florida’s East Coast, its West Coast, or going right up the state’s spine. Moreover, Irma measured hundreds of miles across, so even areas that weren’t directly hit by the eye of the storm could have suffered substantial damage. By Saturday night, the cone of uncertainty had narrowed, but trying to distinguish between St. Petersburg and Tampa was like trying to predict whether 31st Street or 32nd Street would suffer more damage if a nuclear bomb went off in Manhattan.
To its credit, the AP deleted the tweet the next morning. But the episode was emblematic of some of the media’s worst habits when covering hurricanes — and other events that involve interpreting probabilistic forecasts. Before a storm hits, the media demands impossible precision from forecasters, ignoring the uncertainties in the forecast and overhyping certain scenarios (e.g. the storm hitting Miami) at the expense of other, almost-as-likely ones (e.g. the storm hitting Marco Island). Afterward, it casts aspersions on the forecasts unless they happened to exactly match the scenario the media hyped up the most.
Indeed, there’s a fairly widespread perception that meteorologists performed poorly with Irma, having overestimated the threat to some places and underestimated it elsewhere. Even President Trump chimed in to say the storm hadn’t been predicted well, tweeting that the devastation from Irma had been “far greater, at least in certain locations, than anyone thought.” In fact, the Irma forecasts were pretty darn good: Meteorologists correctly anticipated days in advance that the storm would take a sharp right turn at some point while passing by Cuba. The places where Irma made landfall — in the Caribbean and then in Florida — were consistently within the cone of uncertainty. The forecasts weren’t perfect: Irma’s eye wound up passing closer to Tampa than to St. Petersburg after all, for example. But they were about as good as advertised. And they undoubtedly saved a lot of lives by giving people time to evacuate in places like the Florida Keys.
The media keeps misinterpreting data — and then blaming the data
You won’t be surprised to learn that I see a lot of similarities between hurricane forecasting and election forecasting — and between the media’s coverage of Irma and its coverage of the 2016 campaign. In recent elections, the media has often overestimated the precision of polling, cherry-picked data and portrayed elections as sure things when that conclusion very much wasn’t supported by polls or other empirical evidence.
As I’ve documented throughout this series, polls and other data did not support the exceptionally high degree of confidence that news organizations such as The New York Times regularly expressed about Hillary Clinton’s chances. (We’ve been using the Times as our case study throughout this series, both because they’re such an important journalistic institution and because their 2016 coverage had so many problems.) On the contrary, the more carefully one looked at the polling, the more reason there was to think that Clinton might not close the deal. In contrast to President Obama, who overperformed in the Electoral College relative to the popular vote in 2012, Clinton’s coalition (which relied heavily on urban, college-educated voters) was poorly configured for the Electoral College. In contrast to 2012, when hardly any voters were undecided between Obama and Mitt Romney, about 14 percent of voters went into the final week of the 2016 campaign undecided about their vote or saying they planned to vote for a third-party candidate. And in contrast to 2012, when polls were exceptionally stable, they were fairly volatile in 2016, with several swings back and forth between Clinton and Trump — including the final major swing of the campaign (after former FBI Director James Comey’s letter to Congress), which favored Trump.
By Election Day, Clinton simply wasn’t all that much of a favorite; she had about a 70 percent chance of winning according to FiveThirtyEight’s forecast, as compared to 30 percent for Trump. Even a 2- or 3-point polling error in Trump’s favor — about as much as polls had missed on average, historically — would likely be enough to tip the Electoral College to him. While many things about the 2016 election were surprising, the fact that Trump narrowly won10 when polls had him narrowly trailing was an utterly routine and unremarkable occurrence. The outcome was well within the “cone of uncertainty,” so to speak.
So if the polls called for caution rather than confidence, why was the media so sure that Clinton would win? I’ve tried to address that question throughout this series of essays — which we’re finally concluding, much to my editor’s delight.11
Probably the most important problem with 2016 coverage was confirmation bias — coupled with what you might call good old-fashioned liberal media bias. Journalists just didn’t believe that someone like Trump could become president, running a populist and at times also nationalist, racist and misogynistic campaign in a country that had twice elected Obama and whose demographics supposedly favored Democrats. So they cherry-picked their way through the data to support their belief, ignoring evidence — such as Clinton’s poor standing in the Midwest — that didn’t fit the narrative.
But the media’s relatively poor grasp of probability and statistics also played a part: It led them to misinterpret polls and polling-based forecasts that could have served as a reality check against their overconfidence in Clinton.
How a probabilistic election forecast works — and how it can be easy to misinterpret
The idea behind an election forecast like FiveThirtyEight’s is to take polls (“Clinton is ahead by 3 points”) and transform them into probabilities (“She has a 70 percent chance of winning”). I’ve been designing and publishing forecasts like these for 15 years12 in two areas (politics and sports) that receive widespread public attention. And I’ve found there are basically two ways that things can go wrong.
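As a toy version of that polls-to-probabilities transformation — not FiveThirtyEight’s actual model — you can assume the final margin is normally distributed around the polling lead, with a standard deviation borrowed from historical polling error (the 5-point sigma below is an illustrative assumption):

```python
from statistics import NormalDist

def lead_to_win_prob(lead: float, sigma: float) -> float:
    """Chance the polling leader wins, assuming the final margin is
    distributed Normal(lead, sigma). A margin above zero means the
    leader held on."""
    return 1 - NormalDist(mu=lead, sigma=sigma).cdf(0)

# A 3-point lead with a hypothetical 5-point historical error lands
# in roughly the 70-percent range, not anywhere near certainty.
print(round(lead_to_win_prob(3, 5), 2))  # -> 0.73
```

The point of the sketch is that a modest polling lead converts into a probability that still leaves the trailing candidate with very real chances.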
First, there are errors of analysis. As an example, if you had a model of last year’s election that concluded that Clinton had a 95 or 99 percent chance of winning, you committed an analytical error.13 Models that expressed that much confidence in her chances had a host of technical flaws, such as ignoring the correlations in outcomes between states.14
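To see why ignoring correlation inflates confidence, here is a toy Monte Carlo simulation — the five state leads and the error sizes are hypothetical, not FiveThirtyEight’s. When every state shares a common national polling error, the leader’s chance of carrying a majority of states drops noticeably, even though the total error per state is about the same in both scenarios:

```python
import random

def win_prob(margins, need, shared_sigma, state_sigma, trials=50_000):
    """Chance the polling leader carries at least `need` states, when
    each state's error is a shared national component plus independent
    state-level noise."""
    rng = random.Random(42)  # fixed seed for reproducibility
    wins = 0
    for _ in range(trials):
        shared = rng.gauss(0, shared_sigma)  # error common to every state
        carried = sum(m + shared + rng.gauss(0, state_sigma) > 0
                      for m in margins)
        wins += carried >= need
    return wins / trials

leads = [3, 3, 2, 1, 1]  # hypothetical leads in five decisive states

# Independent errors only: misses cancel out, and the leader looks safe.
print(win_prob(leads, need=3, shared_sigma=0, state_sigma=4))

# Mostly shared error (same total error per state, ~sqrt(3.5^2 + 2^2) = 4):
# states miss together, so the underdog's chances are much larger.
print(win_prob(leads, need=3, shared_sigma=3.5, state_sigma=2))
```

A model that treats state errors as independent is effectively averaging away the very scenario — a uniform national polling miss — that actually decided 2016.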
But while statistical modeling may not always hit the mark, people’s subjective estimates of how polls translate into probabilities are usually even worse. Given a complex set of polling data — say, the Democrat is ahead by 3 points in Pennsylvania and Michigan, tied in Florida and North Carolina, and down by 2 points in Ohio — it’s far from obvious how to figure out the candidate’s chances of winning the Electoral College. Ad hoc attempts to do so can lead to problematic coverage like this article that appeared in The New York Times last Oct. 31, three days after Comey had sent his letter to Congress:
Mrs. Clinton’s lead over Mr. Trump appears to have contracted modestly, but not enough to threaten her advantage over all or to make the electoral math less forbidding for Mr. Trump, Republicans and Democrats said. […]
The loss of a few percentage points from Mrs. Clinton’s lead, and perhaps a state or two from the battleground column, would deny Democrats a possible landslide and likely give her a decisive but not overpowering victory, much like the one President Obama earned in 2012. […]
You’ll read lots of clips like this during an election campaign, full of claims about the “electoral math,” and they often don’t hold up to scrutiny. In this case, the article’s assertion that the loss of “a few percentage points” wouldn’t hurt Clinton’s chances of victory was wrong, and not just in hindsight; instead, the Comey letter made Clinton much more vulnerable, roughly doubling Trump’s probability of winning.
But even if you get the modeling right, there’s another whole set of problems to think about: errors of interpretation and communication. These can run in several different directions. Consumers can misunderstand the forecasts, since probabilities are famously open to misinterpretation. But people making the forecasts can also do a poor job of communicating the uncertainties involved. For example, although weather forecasters are generally quite good at describing uncertainty, the cone of uncertainty is potentially problematic because viewers might not realize it represents only two-thirds of possible landfall locations.
Intermediaries — other people describing a forecast on your behalf — can also be a problem. Over the years, we’ve had many fights with well-meaning TV producers about how to represent FiveThirtyEight’s probabilistic forecasts on air. (We don’t want a state where the Democrat has only a 51 percent chance to win to be colored in solid blue on their map, for instance.) And critics of statistical forecasts can make communication harder by passing along their own misunderstandings to their readers. After the election, for instance, The New York Times’ media columnist bashed the newspaper’s Upshot model (which had estimated Clinton’s chances at 85 percent) and others like it for projecting “a relatively easy victory for Hillary Clinton with all the certainty of a calculus solution.” That’s pretty much exactly the wrong way to describe such a forecast, since a probabilistic forecast is an expression of uncertainty. If a model gives a candidate a 15 percent chance, you’d expect that candidate to win about one election in every six or seven tries. You wouldn’t expect the fundamental theorem of calculus to be wrong … ever.
I don’t think we should be forgiving of innumeracy like this when it comes from prominent, experienced journalists. But when it comes to the general public, that’s a different story — and there are plenty of things for FiveThirtyEight and other forecasters to think about in terms of our communication strategies. There are many potential avenues for confusion. People associate numbers with precision, so using numbers to express uncertainty in the form of probabilities might not be intuitive. (Listing a decimal place in our forecast, as FiveThirtyEight historically has done — e.g. a 28.6 percent chance rather than 29 percent or 30 — probably doesn’t help in this regard.) Also, both probabilities and polls are usually listed as percentages, so people can confuse one for the other — they might mistake a forecast showing Clinton with a 70 percent chance of winning as meaning she has a 70-30 polling lead over Trump, which would put her on her way to a historic, 40-point blowout.15
What can also get lost is that election forecasts — like hurricane forecasts — represent a continuous range of outcomes, none of which is likely to be exactly right. The following diagram is an illustration that we’ve used before to show uncertainty in the FiveThirtyEight forecast. It’s a simplification — showing a distribution for the national popular vote only and which candidate wins the Electoral College.16 Still, the diagram demonstrates several important concepts for interpreting polls and forecasts:
- First, as I mentioned, no exact outcome is all that likely. If you rounded the popular vote to the nearest whole number, the most likely outcome was Clinton winning by 4 percentage points. Nonetheless, the chance that she’d win by exactly 4 points17 was only about 10 percent. “Calling” every state correctly in the Electoral College is even harder. FiveThirtyEight’s model did it in 2012 — in a lucky break18 that may have given people a false impression about how easy it is to forecast elections — but we estimated that the chances of having a perfect forecast again in 2016 were only about 2 percent. Thus, properly measuring the uncertainty is at least as important a part of the forecast as plotting the single most likely course. You’re almost always going to get something “wrong” — so the question is whether you can distinguish the relatively more likely upsets from the relatively less likely ones.
- Second, the distribution of possible outcomes was fairly wide last year. The distribution is based on how accurate polls of U.S. presidential elections have been since 1972, accounting for the number of undecideds and the number of days until the election. The distribution was wider than usual because there were a lot of undecided voters — and more undecided voters mean more uncertainty. Even in a normal year, however, the polls aren’t quite as precise as most people assume.
- Third, the forecast is continuous, rather than binary. When evaluating a poll or a polling-based forecast, you should look at the margin between the poll and the actual result, not just who won and lost. If a poll showed the Democrat winning by 1 point and the Republican won by 1 point instead, the poll did a better job than if the Democrat had won by 9 points (even though the poll would have “called” the outcome correctly in the latter case). By this measure, polls in this year’s French presidential election — which Emmanuel Macron was predicted to win by 22 points but actually won by 32 points — were much worse than polls of the 2016 U.S. election.
- Finally, the actual outcome in last year’s election was right in the thick of the probability distribution, not out toward the tails. The popular vote was obviously pretty close to what the polls estimated it would be. It also wasn’t that much of a surprise that Trump won the Electoral College, given where the popular vote wound up. (Our forecast gave Trump better than a 25 percent chance of winning the Electoral College conditional on losing the popular vote by 2 points,19 an indication of his demographic advantages in the swing states.) One might even dare to say that the result last year was relatively predictable, given the range of possible outcomes.
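The margin-based evaluation described in the third point is easy to make concrete. Using the examples from the text:

```python
def margin_miss(predicted: float, actual: float) -> float:
    """Size of a poll's miss on the margin, in points, regardless of
    whether the poll 'called' the winner correctly."""
    return abs(actual - predicted)

# Poll: Democrat +1. A 1-point Republican win is a smaller miss...
print(margin_miss(1, -1))   # 2-point miss, despite the wrong "call"

# ...than a 9-point Democratic win, even though that "call" was right.
print(margin_miss(1, 9))    # 8-point miss

# 2017 French runoff: Macron predicted to win by 22, actually won by 32.
print(margin_miss(22, 32))  # 10-point miss
```

Scoring polls this way rewards accuracy on the quantity they actually measure — the margin — rather than on a binary outcome the pollster never claimed to know for certain.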
The press presumed that Clinton would win, but the public saw a close race
I’ve often heard it asserted that the widespread presumption of an inevitable Clinton victory was itself a problem for her campaign20 — Clinton has even made a version of this claim herself. So we have to ask: Could this misreading of the polls — and polling-based forecasts — actually have affected the election’s outcome?
It depends on whether you’re talking about how the media and other political elites read the polls — and how that influenced their behavior — or how the general public did. Regular voters, it turns out, were not especially confident about Clinton’s chances last year. For instance, in the final edition of the USC Dornsife/Los Angeles Times tracking poll, which asked voters to guess the probability of Trump and Clinton winning the election, the average voter gave Clinton only a 53 percent chance of winning and gave Trump a 43 percent chance — so while respondents slightly favored Clinton, it wasn’t with much confidence at all.
The American National Election Studies also asked voters to predict the most likely winner of the race, as it’s been doing since 1952. It found that 61 percent of voters expected Clinton to win, as compared to 33 percent for Trump.21 This proportion is about the same as in other years — such as 2004 — in which polls showed a fairly close race, although one candidate (in that case, George W. Bush) was usually ahead. While, unlike the LA Times poll, the ANES did not ask voters to estimate the probability of Clinton winning, it did ask voters a follow-up question about whether they expected the election to be close or thought one of the candidates would “win by quite a bit.” Only 20 percent of respondents predicted a Clinton landslide, and only 7 percent expected a Trump landslide. Instead, almost three-quarters of voters correctly predicted a close outcome.
[Table: “Which party will win the presidency?” — ANES voter expectations]
So be wary if you hear people within the media bubble22 assert that “everyone” presumed Clinton was sure to win. Instead, that presumption reflected elite groupthink — and it came despite the polls as much as because of the polls. There was a bewilderingly large array of polling data during last year’s campaign, and it didn’t always tell an obvious story. During the final week of the campaign, Clinton was ahead in most polls of most swing states, but with quite a few exceptions23 — and many of Clinton’s leads were within the margin of error and had been fading during the final 10 days of the campaign. The public took in this information and saw Clinton as the favorite, but they didn’t expect a blowout and viewed the outcome as highly uncertain. Our model read it the same way. The media looked at the same ambiguous data and saw what they wanted in it, using it to confirm their presumption that Trump couldn’t win.
News organizations learned the wrong lessons from 2012
During the 2012 election, FiveThirtyEight’s forecast consistently gave Obama better odds of winning re-election than the conventional wisdom did. Somehow in the midst of it, I became an avatar for projecting certainty in the face of doubt. But this role was always miscast — indeed, it’s quite the opposite of what I hope readers take away from FiveThirtyEight’s work. In addition to making my own forecasts, I’ve spent a lot of my life studying probability and uncertainty. Cover these topics for long enough and you’ll come to a fairly clear conclusion: When it comes to making predictions, the world usually needs less certainty, not more.
A major takeaway from my book and from other people’s research on prediction is that most experts — including most journalists — make overconfident forecasts. (Weather forecasters are an important exception.) Events that experts claim to be nearly certain (say, a 95 percent probability) are often merely probable instead (the real probability is, say, 70 percent). And events they deem to be nearly impossible occur with some frequency. Another, related type of bias is that experts don’t change their minds quickly enough in the face of new information,24 sticking stubbornly to their previous beliefs even after the evidence has begun to mount against them.
Media coverage of major elections had long been an exception to this rule of expert overconfidence. For a variety of reasons — no doubt including the desire to inject drama into boring races — news coverage tended to overplay the underdog’s chances in presidential elections and to exaggerate swings in the polls. Even in 1984, when Ronald Reagan led Walter Mondale by 15 to 20 percentage points in the stretch run of the campaign, The New York Times somewhat credulously reported on Mondale’s enthusiastic crowds and talked up the possibility of a Dewey-defeats-Truman upset. The 2012 election — although it was a much closer race than 1984 — was another such example: Reporting focused too much on national polls and not enough on Obama’s Electoral College advantage, and thus portrayed the race as a “toss-up” when in reality Obama was a reasonably clear favorite. (FiveThirtyEight’s forecast gave Obama about a 90 percent chance of winning re-election on election morning.)
Since then, the pendulum has swung too far in the other direction, with the media often expressing more certainty about the outcome than is justified based on the polls. In addition to lowballing the chances for Trump, the media also badly underestimated the probability that the U.K. would leave the European Union in 2016, and that this year’s U.K. general election would result in a hung parliament, for instance. There are still some exceptions — the conventional wisdom probably overestimated Marine Le Pen’s chances in France. Nonetheless, there’s been a noticeable shift from the way elections used to be covered, and it’s worth pausing to consider why that is.
One explanation is that news organizations learned the wrong lessons from 2012. The “moral parable” of 2012, as Scott Alexander wrote, is that Romney was “the arrogant fool who said that all the evidence against him was wrong, but got his comeuppance.” Put another way, the lesson of 2012 was to “trust the data,” especially the polls.
FiveThirtyEight and I became emblems of that narrative, even though we sometimes tried to resist it. What I think people forget is that the confidence our model expressed in Obama’s chances in 2012 was contingent upon circumstances peculiar to 2012 — namely that Obama had a much more robust position in the Electoral College than national polls implied, and that there were very few undecided voters, reducing uncertainty. The 2012 election may have superficially looked like a toss-up, but Obama was actually a reasonably clear favorite. Pretty much the opposite was true in 2016 — the more carefully one evaluated the polls, the more uncertain the outcome of the Electoral College appeared. The real lesson of 2012 wasn’t “always trust the polls” so much as “be rigorous in your evaluation of the polls, because your superficial impression of them can be misleading.”
Another issue is that uncertainty is a tough sell in a competitive news environment. “The favorite is indeed favored, just not by as much as everyone thinks once you look at the data more carefully, so bet on the favorite at even money but the underdog against the point spread” isn’t that complicated a story, but it can be a difficult message to get across on TV in the midst of an election campaign when everyone has the attention span of a sugar-high 4-year-old. It can be even harder on social media, where platforms like Facebook reward simplistic coverage that confirms people’s biases.
Journalists should be wary of “the narrative” and more transparent about their provisional understanding of developing stories
But every news organization faced competitive pressure in covering last year’s election — and only some of them screwed up the story. Editorial culture mattered a lot. In general, the problems were worse at The New York Times and other organizations that (as Michael Cieply, a former Times editor, put it) heavily emphasized “the narrative” of the campaign and encouraged reporters to “generate stories that fit the pre-designated line.”
If you re-read the Times’ general election coverage from the conventions onward,25 you’ll be struck by how consistent it was from start to finish. Although the polls were fairly volatile in 2016, you can’t really distinguish the periods when Clinton had a clear advantage from those when things were pretty tight. Instead, the narrative was consistent: Clinton was a deeply flawed politician, the “worst candidate Democrats could have run,” cast in “shadows” and “doubts” because of her ethical lapses. However, she was almost certain to win because Trump appealed to too narrow a range of demographic groups and ran an unsophisticated campaign, whereas Clinton’s diverse coalition and precise voter-targeting efforts gave her an inherent advantage in the Electoral College.
It was a consistent story, but it was consistently wrong.
One can understand why news organizations find “the narrative” so tempting. The world is a complicated place, and journalists are expected to write authoritatively about it under deadline pressure. There’s a management consulting adage that says when creating a product, you can pick any two of these three objectives: 1. fast, 2. good and 3. cheap. You can never have all three at once. The equivalent in journalism is that a story can be 1. fast, 2. interesting and/or 3. true — two out of the three — but it’s hard for it to be all three at the same time.
Deciding on the narrative ahead of time seems to provide a way out of the dilemma. Pre-writing substantial portions of the story — or at least, having a pretty good idea of what you’re going to say — allows it to be turned around more quickly. And narratives are all about wrapping the story up in a neat-looking package and telling readers “what it all means,” so the story is usually engaging and has the appearance of veracity.
The problem is that you’re potentially sacrificing No. 3, “true.” By bending the facts to fit your template, you run the risk of getting the story completely wrong. To make matters worse, most people — including most reporters and editors (also: including me) — have a strong tendency toward confirmation bias. Presented with a complicated set of facts, it takes a lot of work for most of us not to connect the dots in a way that confirms our prejudices. An editorial culture that emphasizes “the narrative” indulges these bad habits rather than resists them.
Instead, news organizations reporting under deadline pressure need to be more comfortable with a world in which our understanding of developing stories is provisional and probabilistic — and will frequently turn out to be wrong. FiveThirtyEight’s philosophy is basically that the scientific method, with its emphasis on verifying hypotheses through rigorous analysis of data, can serve as a model for journalism. The reason is not because the world is highly predictable or because data can solve every problem, but because human judgment is more fallible than most people realize — and being more disciplined and rigorous in your approach can give you a fighting chance of getting the story right. The world isn’t one where things always turn out exactly as we want them to or expect them to. But it’s the world we live in.
CORRECTION (Sept. 21, 2:40 p.m.): A previous version of footnote No. 10 mistakenly referred to the Electoral College in place of the national popular vote when discussing Trump’s chances of winning the election. The article has been updated.