[syndicated profile] 538_feed

Posted by Maggie Koerth-Baker

President Trump’s voter fraud commission has the stated goal of ensuring the integrity of the vote as “the foundation of our democracy.” But, like the buried foundations of a building, who votes and how they vote aren’t easy things to examine.

In alleging that there’s widespread voter fraud, commission Vice Chair Kris Kobach has relied on proxies, such as the indirect measure of matching up names in voter registries to identify people registered in more than one state. In the lead-up to the commission’s second meeting last week, he also railed against thousands of New Hampshire voters who registered using out-of-state licenses — which he claimed proved that people were hopping state borders to illegally swing elections.

The experts I spoke with said those metrics don’t really measure the existence or risk of illegal voting. In fact, they said, it’s probably impossible to conclusively prove or disprove allegations of widespread illegal voting — though they pointed out that very few cases have ever been found and prosecuted, even as Kobach is aggressively seeking them out to prove his hypothesis of rampant voter fraud.

When Kobach employs these proxies as proof of voter fraud, though, he is implicitly suggesting that changes need to be made to the voting system to protect its integrity, such as ensuring that the same name never turns up on multiple registries and voters never use out-of-state licenses at the polls. But those irregularities exist because of the fundamental American values the commission is dedicated to protecting: You can’t easily and swiftly clean up registry errors without disenfranchising millions of voters. And you can’t set up a uniform, nationalized voter registry in a country whose founding values are based on limited federal control.

The problem with proxies is that they do more to demonstrate the complex nature of American values than they do to prove our elections are rigged.

If Kobach were simply claiming that voter registries are messy — full of errors and inaccuracies — he’d be correct. Research published by the Pew Center on the States in 2012 estimated that 24 million registration records (13 percent of all the registrations in the country) contained information that was likely inaccurate — names that had changed, addresses that were no longer up-to-date, people who had died, simple typos. And double-registered voters — a favorite target of Kobach’s — reached nearly 3 million. Likewise, he’s also right that people do sometimes vote in states where they aren’t officially residents. That’s particularly true of college students, who might spend most of their time in a place they don’t technically live. Depending on local laws, those students can use out-of-state licenses to prove their identities at the ballot box.

But experts say that neither of these proxies is particularly good evidence of illegal voting. Primarily, that’s because both things are 100 percent legal and exist for reasons that have nothing to do with fraud. Take double registration, for instance: When Americans are double registered, it’s usually because they’ve moved and their names were never cleared out of the system in their previous state of residence.

We did a quick survey of FiveThirtyEight staffers by checking voter registration rolls in the states they’ve lived in over the past 15 years. Out of 15 people who participated, five were double-registered. I’m one of them, with active voter registrations in Minnesota, where I live, and Alabama, a state I last lived in in 2006. Three staffers were only registered in states they no longer live in. One person wasn’t registered anywhere, much to his surprise. Bottom line: Americans don’t stay in one place forever, and bureaucracy doesn’t always keep up with us.

Then there’s the specter of out-of-state voters. Kobach claimed that more than 5,000 people had come to New Hampshire from other states to vote in (and try to change the outcome of) the November election. His proof was a list of people who had taken advantage of New Hampshire’s same-day registration laws, had used out-of-state driver’s licenses to verify their identities and had not later applied for New Hampshire licenses or vehicle registrations. Kobach has received plenty of pushback on the idea that this meant they weren’t legitimate Granite State voters, including from other members of the commission during last week’s meeting. That’s because it’s likely that many of those people whom he called fraudulent voters were actually college students voting in New Hampshire because that’s where they spent most of their time and where they were living when Election Day rolled around. The Washington Post found several individuals who attested to having done just that, and the cities with the highest number of out-of-state-license voters were college towns.

Just because these practices don’t prove voter fraud, though, doesn’t mean they aren’t confusing and even at times problematic. It’s certainly not ideal to have voter registries loaded with the “dead wood” of misspelled names and people who’ve left the state, said Charles Stewart, professor of political science at MIT. Those errors can prevent people from voting if, say, their current address and registry address don’t match. People in that situation could be turned away or forced to file provisional ballots.

And Stewart said he believes they suggest deeper administrative problems — especially when the state doesn’t know exactly how many errors its voter rolls contain. “What if a school said, ‘We don’t know how many people graduated’? We’d be really suspicious of public officials that had sloppy reporting,” Stewart said. “It’s generally good public policy to have good records.”

That’s why states go through the process of cleaning up voter registration rolls — removing the dead and the people who have left the state to try to maintain an accurate count of voters. But here’s where American values conflict with clean database management: You can’t just unceremoniously purge people from the records because they haven’t voted in a while or because they appear to be registered in another state, said Walter Mebane, professor of political science and statistics at the University of Michigan.

The National Voter Registration Act prevents states from doing just that because it’s likely to end up illegally stripping people of their right to vote. States have to go through a process of trying to match voter registry records to other kinds of data and alerting voters if it looks like they should be removed. There’s no uniform procedure for this, and the quality of registry maintenance (and election administration in general) varies widely from state to state. The courts are still hashing out what is and isn’t appropriate. For instance, the Supreme Court will hear arguments in November in a case on Ohio’s registry maintenance methodology, which purged voters from the rolls if they hadn’t voted in six years.

You could fix the problem — and probably make it easier to see if people have truly double-voted, not just double-registered — by having a single national voter registry, Mebane told me. “But there’s no reason to worry about that because it would never happen,” he said, explaining that it would be anathema to our national values.

Those values strongly favor local control of elections, even when it’s not the most efficient choice. That preference dates back to the beginnings of the country, when county officials tallied in-person voice votes from citizens who didn’t need to be registered at all. As things like the secret ballot and voter registration were added into the mix, cities, counties and states came up with different ways to handle the new complications, collect the records and administer the elections. Today, elections are governed by states, but a lot of the nuts-and-bolts management still happens at the city or county level — often in ways that vary from one town to another. And shifting away from that diverse local control probably wouldn’t be terribly popular, given that Americans’ confidence in election results and fair handling of votes decreases as the level of administration moves further from where they live.

The same is true with out-of-state voting: You can simplify the system, but that would conflict with other values. Courts have repeatedly said students can vote where they study. “Nobody can lose their right to vote because of issues with residency as a student,” said Marc Meredith, professor of political science at the University of Pennsylvania — something that would be likely to happen if students were forced to travel back to their home states on Election Day in the middle of their fall semesters.

But Americans are generally less supportive of students voting outside their home states than they are of other 20th-century voting reforms, Stewart said. “There’s a sizeable number of people in the public who just believe that college students should vote where their parents live.”

He based that on the unpublished results of questions he asked in the Cooperative Congressional Election Study in 2013. Although most Americans — 65 percent — said expanding where students could vote improved elections, respondents were less supportive of that than they were of other kinds of reforms — like extending the vote to women.

In other words, Americans are both suspicious of thousands of people from “someplace else” tipping an election and have also set up the legal system to support expansion and protection of the right to vote, even for people who are, technically, from someplace else. The result is a jumble of laws that make the ability of college students to vote — and what forms of ID and documentation they have to bring with them to the polls — vary unpredictably from state to state, even county to county. Even someone like Kobach — a state election official who has made his national career on issues surrounding election transparency — can’t be expected to know what is legal and what isn’t nationwide, experts told me. There’s just too much diversity.

But the data mess explains why it’s difficult to make a case around voter fraud from either side. Just because a situation isn’t ideal doesn’t mean it’s proof of illegal voting. Instead, Meredith said, he wishes Kobach and the commission would focus on finding better ways to systematically study voting — ways that line up with both the needs of researchers and American values. “Your hope would be that’s what a voter integrity commission would be,” he said. “Rather than jumping to conclusions on the basis of proxies that may or may not have validity.”

The GOP’s Catch-22 On Obamacare

Sep. 22nd, 2017 09:49 am
[syndicated profile] 538_feed

Posted by Harry Enten

Welcome to Pollapalooza, our weekly polling roundup. Today’s theme song: “Everything’s Relative.”

Poll of the week

Republicans in the U.S. Senate have just over a week, until Sept. 30, to pass an Obamacare repeal bill with a bare majority (instead of 60 votes). But in the rush of whip counts and CBO scores, don’t forget: This is an incredibly dangerous debate for Republicans. The public, through a variety of poll results, has made plain that it doesn’t like what the GOP is doing.

The latest YouGov poll, for example, found that 38 percent of respondents picked Democrats as the party that would do “a better job handling the problem of health care”; 24 percent picked Republicans. The Affordable Care Act, meanwhile, has a positive net favorable rating, and the various GOP repeal-and-replace bills have generally polled terribly.

President Trump should also be worried about an unpopular health care bill passing. His overall job approval rating has climbed in recent weeks as news networks have been focused on hurricanes, but his approval rating has tended to decline when Americans are more focused on the health care debate. Trump himself has an approval rating of just 27 percent on the issue of health care, according to the latest NBC/Wall Street Journal survey.

So why are Trump and congressional Republicans barreling on anyway? Republican voters want them to. According to a Politico/Harvard T.H. Chan School of Public Health poll, 53 percent of Republicans said repealing and replacing Obamacare was an “extremely important priority” for them. That 53 percent was higher than it was for any other issue polled. Lowering taxes, which Republicans are also gearing up to do, was rated as extremely important by just 34 percent of Republicans.

The question for Republicans, therefore, is whether they want to pass a bill and upset the electorate at large or leave a seven-year promise to repeal Obamacare unfulfilled and upset their base. Neither option is all that appealing politically.

Other polling nuggets

  • It’s close in Virginia — Democrats were perhaps hoping that Trump’s unpopularity would allow Ralph Northam to run away with the Virginia governor’s race. It hasn’t happened. In an average of five surveys conducted this month, Northam is nursing a 45 percent to 41 percent lead over Republican Ed Gillespie. Northam may have more room to grow because African-Americans, who overwhelmingly vote Democratic, tend to make up a disproportionate share of undecideds in these polls. But also remember that the link between how voters feel about a president and how they vote for governor isn’t as strong as you might think.
  • How students understand free speech — UCLA Professor John Villasenor published a poll this week in which college students offered their opinions on free speech. Among the findings: A plurality of students said the First Amendment does not protect hate speech (44 percent to 39 percent). A slim majority said it is OK for students to shout down a guest speaker (51 percent to 49 percent). And finally, 19 percent of all students (and 30 percent of male students) said it was OK for students to use violence to prevent someone from speaking. I highly suggest reading the entire poll.
  • Moore remains ahead in Alabama — The Alabama Republican primary runoff is Tuesday, and the GOP establishment should be worried. Firebrand conservative Roy Moore led Sen. Luther Strange in two polls released this week — 53 percent to 47 percent in a Strategy Research poll and 50 percent to 42 percent in a JMC Analytics poll. Still, Moore’s 8-point margin in the latter poll is down from 19 points the last time JMC Analytics surveyed the race. Put another way: Moore is the favorite, but don’t be shocked if Strange pulls it out.
  • Bill de Blasio is cruising to re-election — After New York Mayor Bill de Blasio captured nearly 75 percent of the Democratic primary vote last week, a new Marist College poll suggests that he may come close to that percentage in November’s general election. De Blasio was ahead 65 percent to 18 percent over Republican Nicole Malliotakis. Perhaps that shouldn’t be too surprising given the heavy Democratic registration edge in New York City. Remember, though, that New York didn’t elect a Democratic mayor in any of the five elections before de Blasio won in 2013.

Trump’s job approval ratings

Trump’s job approval rating is 39.5 percent. His disapproval rating is 53.6. Both of those are improvements for Trump over last week’s 38.5 percent to 55.6 percent spread, and they continue a longer-term positive trend for the president. Just last month, his approval rating was below 37 percent, and his disapproval rating was above 57 percent. The timing of Trump’s improved numbers lines up pretty well with Hurricane Harvey making landfall in the U.S.

The generic ballot

Democrats are ahead of Republicans 46.4 percent to 38.6 percent on the generic congressional ballot. That’s a slight improvement for Republicans from last week when they were down 45.5 percent to 36.0 percent.

[syndicated profile] 538_feed

Posted by Benjamin Morris

Before the Super Bowl in February, we published a fairly comprehensive guide for when to go for 2, simplified into one slightly complicated (but very easy to use once you get the hang of it!) chart. In addition to hopefully demystifying how to judge a lot of borderline situations, we identified some fairly clear-cut cases in which NFL coaches should choose to go for 2 but don’t. Ever.

My hope, of course, was that teams would read this (or figure it out on their own) and that we’d see an immediate and cataclysmic shift in 2-point strategy — like going for it when down 4, 8, or 11 after scoring a touchdown late (which are not only real cases, but ones that are usually clear-cut and significant). But, alas, no such luck.

The logic is pretty simple: If you can estimate your team’s chances of winning with an X point lead/deficit (X points being how many points you are up or down following a touchdown) and your chances of winning with X+1 and X+2, the decision follows from simple arithmetic. In fact, given that 2-point attempts and extra-point attempts taken from the 15-yard line (under the new rules implemented in 2015) now have roughly the same expected point value (both around 0.95 points), the choice is easier than ever. Simply calculate (or estimate):

  • The improvement in win percentage if your point margin changed from X to X+1.
  • The improvement in win percentage if your point margin changed from X+1 to X+2.

If the first number is greater, kick the extra point. If the second is, go for 2.
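
To make that arithmetic concrete, here is a minimal sketch of the rule in Python. The win_prob function is a hypothetical placeholder for whatever win-probability estimate you have on hand (a model, a chart lookup or a well-calibrated gut); it is not FiveThirtyEight’s actual model.

```python
# A minimal sketch of the decision rule described above. `win_prob` is a
# hypothetical placeholder: any function mapping a post-touchdown point
# margin to a win probability (0 to 1). It is not FiveThirtyEight's model.

def should_go_for_two(win_prob, margin_after_td):
    # Improvement from moving the margin from X to X+1 (a successful kick).
    first_point_gain = win_prob(margin_after_td + 1) - win_prob(margin_after_td)
    # Improvement from moving the margin from X+1 to X+2 (what the second
    # point of a successful 2-point try adds, relative to a successful kick).
    second_point_gain = win_prob(margin_after_td + 2) - win_prob(margin_after_td + 1)
    # With kicks and 2-point tries worth roughly equal expected points (~0.95),
    # go for 2 exactly when the second point is worth more than the first.
    return second_point_gain > first_point_gain
```

For example, down 2 after a touchdown (margin_after_td = -2), the second point is the one that ties the game, so it is worth far more than the first, and the rule says to go for 2 — matching the “pull within 2” advice later in the piece.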

Now, you can estimate or intuit these differences on your own on the fly, or you can use a fancy win probability model like we have, but the logic is the same.

Of course, we’ve taken it a bit further — our chart uses multiple sets of assumptions to create a range for each scenario covering teams that are relatively better or worse at 2-point conversions than our baseline. In case you missed it, here’s the chart:

A quick note on reading this chart: It may look a little “loud,” but that’s a feature for looking up scenarios lightning-fast. For a quick approximation, you first look at the minichart corresponding to the point spread (after the touchdown). If the quarter you’re in is shaded bright purple, you probably want to kick; if it’s bright orange, you should probably go for it. If you’re in a rush, you could stop there and be in pretty decent shape.

Through the first two weeks of this NFL season, teams have gone for 2 (from the 2-yard line) eight times overall. More importantly, of the 30 times that the numbers say they should have gone for 2, they did so just four times, for a rate of 13 percent. Since 2015, in the regular season and playoffs, teams that should have gone for 2 have done so around 15 percent of the time.

Now, of course it’s possible that some teams are better or worse at going for 2 than average, but it isn’t possible that 85 percent of teams are worse than average. I’ve also calculated how often teams should “clearly” go for 2 — meaning situations in which they should go for it even if they are relatively quite bad at 2-point attempts — and there have been 16 such cases through Week 2:

Times when teams clearly should have gone for 2

2017 NFL season through Week 2

WEEK TEAM OPPONENT QUARTER TIME SCORE AFTER TD MAGNITUDE WENT FOR IT
1 Cleveland Pittsburgh 4 3:36 -5 2.23 ✓
1 L.A. Chargers Denver 4 7:00 -4 1.62
1 Chicago Atlanta 4 7:26 -4 1.33
1 Detroit Arizona 3 3:07 -2 1.28 ✓
2 Arizona Indianapolis 4 7:38 -4 1.28
1 N.Y. Jets Buffalo 3 2:00 -2 1.24 ✓
1 Detroit Arizona 4 9:27 4 0.43 ✓
1 L.A. Chargers Denver 4 8:10 -11 0.43
1 Jacksonville Houston 2 0:49 18 0.29
1 Baltimore Cincinnati 2 1:28 16 0.29
1 Houston Jacksonville 3 9:09 -13 0.24
2 Cleveland Baltimore 2 4:56 -8 0.24
2 New Orleans New England 4 5:04 -17 0.10
2 Tennessee Jacksonville 3 2:49 19 0.05
2 Dallas Denver 4 14:24 -19 0.05
2 Philadelphia Kansas City 4 0:08 -8 0.05

Magnitude is the amount that a team’s expected win percentage is improved by making the right decision.

Source: ESPN Stats & Information Group.

Teams made the correct decision in four of those 16 cases, for a 25 percent rate. (For comparison: Since 2015, regular season and playoffs combined, teams have gone for 2 points 27 percent of the time in “clear go” scenarios.)

Of course, a decision being clear-cut doesn’t mean that it matters a whole lot, but note that even among the decisions with the most significant consequences, teams are still making the wrong choices regularly (most likely because of adherence to Dick Vermeil’s rigid and outdated system that leads them to repeat the same mistakes over and over). In particular, the aforementioned scenarios of being down 4, 8, or 11 points late are both quite clear and quite important.

Another significant case is when a team scores to pull within 2: Go for 2! This may seem like an obvious one, but since 2015, teams in this situation have chosen to kick the extra point as late as the fourth quarter (once, which is way too many times), and they’ve done so half the time in the third quarter (6 of 12, and still very bad) and 77 percent of the time in the second quarter (10 of 13, and still pretty bad, especially for such an early decision).

This season, teams down 4, 8 or 11 late are holding steady at a 0 percent correct rate, having attempted extra points five out of five times when they “clearly” should have gone for it. That means that over the past three seasons, they’ve gotten these right exactly zero times in 105 chances.

On a slightly brighter note, teams have been down 2 points after a touchdown twice this season — both in the third quarter — and they’ve correctly tried to tie the game both times! It’s not quite the revolution — it isn’t really even shots fired. But maybe, just maybe …

[syndicated profile] 538_feed

Posted by Walt Hickey

Things That Caught My Eye

For whom the AL wild-card slot tolls

It sure looks like the Minnesota Twins are going to snag the American League’s second wild-card slot in the playoffs, and needless to say it’s going to be difficult to get past the recently streaking Indians, Houston’s top-notch offense, or the Yankee-Red Sox industrial complex. They’ve got a two in three chance of nabbing the potentially doomed playoff spot. [FiveThirtyEight]

More like AFC Best you know?

The AFC West — the Kansas City Chiefs, Oakland Raiders, Denver Broncos, and some itinerant caravan of rootless football professionals describing themselves as Chargers — is stacked this year, with the Chiefs, Raiders and Broncos all with better than 50 percent chances to make the playoffs according to ESPN’s football power index. [ESPN]

Technically undefeated!

The Las Vegas Golden Knights are 2-0 so far through the NHL preseason, which is their first as a franchise. Technically speaking, that makes them the only entirely undefeated team playing at the moment. Hockey starts up again October 4. [Knights on Ice]

NFL games getting shorter?

Not including Monday Night Football, the average Week 2 NFL game lasted 3 hours, 4 minutes — down slightly from Week 2 of 2015 and 2016. Obviously we’re going to need a few more weeks of data before making a definitive declaration about the speed of play, but early numbers appear promising. [ESPN]

Baseballs approach their platonic ideal

All baseballs go through the air a little differently — a lower seam here or a smoother ball there marginally affect how they travel — but those slight differences have been getting slighter. Judging by a measure of air resistance, the baseballs used in MLB play since 2008 have been getting more and more internally consistent when it comes to how they fly, which ends up affecting how far they go, which might explain… [FiveThirtyEight]


Big Number

5,694

The number of home runs hit league-wide in the 2017 season when Kansas City’s Alex Gordon connected for one in the eighth inning on Tuesday night, topping the major-league record set in the 2000 season. The league is currently on pace for 6,140 homers. [ESPN]


Leaks from Slack

emily:

Sox going to extra innings again. 2nd day in a row.

colleen:

dammit, @neil, you caused this

neil:

I only caused it if they end up losing

neil:

by reminding them how lucky 14-3 is in extras

[The Red Sox won and were subsequently 15-3 in extra innings]


Oh, and don’t forget
This could be it for Bautista, enjoy him while you can.

[syndicated profile] 538_feed

Posted by Perry Bacon Jr.

Everyone should have seen the Graham-Cassidy Obamacare repeal bill coming. But we didn’t.

Democrats had spent months defending the Affordable Care Act — and they appeared to have succeeded. So just over a week ago, a group of liberal members of the U.S. Senate rolled out their proposal to create a Medicare-for-all program. The group, led by Bernie Sanders, didn’t directly say, “We saved Obamacare, so now it’s time to move on to something even more liberal,” but that was the gist.

How did Democrats end up getting caught so flat-footed, putting out a single-payer proposal that essentially has no chance of becoming law until the White House changes hands while an effort to repeal one of the party’s signature achievements of the last decade gained strength? Because aside from Sens. Bill Cassidy of Louisiana and Lindsey Graham of South Carolina, basically everyone in Washington — Republicans, Democrats, the media — assumed the Obamacare repeal effort was dead. Two weeks ago, President Trump was suggesting that Republicans needed to give up on Obamacare repeal and focus on tax reform, Sen. Lamar Alexander of Tennessee was writing a bipartisan bill to fix Obamacare and Senate Republican leaders were downplaying the possibility that the Obamacare repeal effort would be revived.

So what happened?

Most importantly: Dean Heller of Nevada moved from a weak no to a firm yes — but no one really noticed.

The rise of Graham-Cassidy began on the afternoon of July 27 — hours before the Obamacare repeal effort seemed to die in the Senate. (GOP Sens. Susan Collins, John McCain and Lisa Murkowski formally voted down the “skinny” repeal after 1 a.m. on July 28.)

On that summer Thursday, Heller — who had been one of the Republican holdouts on a bunch of other Obamacare repeal proposals, arguing they cut Medicaid too deeply — became a co-sponsor of the Graham-Cassidy bill. (Estimates suggest Graham-Cassidy will cut federal dollars going to states for health care by up to $400 billion from 2020-2026, much less than the more than $700 billion in estimated Medicaid cuts that were included in some of the proposals Heller opposed.)

It’s not totally clear why Heller signed on to Graham-Cassidy. He may have assumed it would never actually come up for a vote. He may have been worried about re-election: Republican donors in Nevada were reportedly warning Heller that they wouldn’t give him money for his 2018 re-election effort unless he backed Obamacare repeal, and Trump suggested he would oppose Heller in a GOP primary if the senator didn’t join the cause. Or perhaps Heller simply believes in the Graham-Cassidy model of health care policy reform, which would send most Obamacare funds back to states.

Either way, co-sponsoring the bill was an odd move for Heller, largely because he had previously suggested he would back only legislation that both preserved the expanded Medicaid funding Nevada had received through Obamacare and had the support of the state’s GOP governor, Brian Sandoval. Even in July, it was clear that Graham-Cassidy would likely reduce the number of federal dollars going to Nevada for Medicaid, a conclusion that recent estimates support. Sandoval didn’t endorse the legislation back then, and this week he joined a bipartisan group of governors opposing it.

Whatever his reasons, Heller’s support was key, making the Senate math much easier for Cassidy and Graham. Back in July, only three GOP senators (Collins, Heller and Murkowski) had been strong opponents of the Obamacare repeal bills, voting down both the full repeal of Obamacare and a partial repeal largely written by Senate Republican Leader Mitch McConnell. (Of the 52 GOP senators, the other 49 voted for at least one of those two provisions.)

The last-ditch “skinny” repeal bill (which did not include Medicaid cuts) was widely expected to pass because Heller supported it, providing what was thought to be the crucial 50th vote. But at the last minute, his “no” vote was replaced by McCain’s.

In other words, at the end of July, Republicans still had two months left to repeal Obamacare and only two real, solid opponents of their repeal ideas: Collins and Murkowski. They were the only ones to vote against all versions of the repeal, though a number of their GOP colleagues had also said they were reluctant to support various bills. Despite expressing concerns about protecting Medicaid, Sens. Shelley Moore Capito of West Virginia, Jerry Moran of Kansas and Rob Portman of Ohio all eventually voted for a version of Obamacare repeal that would have cut Medicaid spending. So did McCain, who said some of his objections to the “skinny” repeal bill were about the process by which it had been written (without any Democratic input and without going through the traditional committees and hearings). Mike Lee of Utah and Rand Paul of Kentucky, two of the most conservative GOP senators, had voted for “skinny” repeal, despite complaining that the Obamacare repeal proposals left much of the ACA in place.

So assuming Murkowski and Collins were the only real holdouts, Heller’s support gave the Obamacare repeal 50 votes — at least in theory.

Meanwhile, Cassidy and Graham spent much of August and early September touting their bill. Senate Republican leaders were not enthusiastic about coming back from their summer recess to face another attempt at an Obamacare repeal. Neither were rank-and-file senators. But no senator was actually saying, “I will vote against this bill if it comes to the floor.”

Fast forward to this week and it’s easy to see why Senate Republicans want to give Obamacare repeal a final try. Yes, McCain is a problem, because this bill is, like the July legislation, a GOP-only proposal written outside of the traditional committee process. And he demonstrated in July that he is not afraid to be the deciding vote against an Obamacare repeal.

But McCain has not really given any policy-driven reasons for voting this bill down. And Graham is a very close friend of his. He may still vote yes.

Paul ultimately backed the skinny repeal bill in July despite his early objections, so Republican leaders are probably betting that his threats to vote against this bill are also empty. That’s not an unreasonable assumption.

Collins and Murkowski still sound like “no” votes, and they consistently voted “no” before. But if Collins and Murkowski are the only noes, the Republicans can pass Graham-Cassidy. So look for Paul and McCain to get plenty of calls from the White House and fellow Republicans imploring them to back this legislation, and for the Democrats to back off talking about Medicare-for-all for a bit. In short, the GOP is exactly where it was at the end of July, but with much less time left to get a deal done.

[syndicated profile] 538_feed

Posted by Daniel Levitt

The NFL will take over London for the 18th time — and the 11th consecutive year — this weekend when the Baltimore Ravens take on veteran overseas travelers the Jacksonville Jaguars at Wembley Stadium. The game will be the first of four set in England this season, the most that have been played in a calendar year.

For the NFL, the additional game — there have been three in London each of the past three seasons — represents a concerted effort to expand the popularity and global reach of its brand. For the British, it’s another chance to watch lousy football.

It’s no secret that the teams that NFL commissioner Roger Goodell has sent have been overwhelmingly bad — and we aren’t just talking about the Jaguars. According to FiveThirtyEight’s pre-game Elo ratings, the harmonic mean of both teams’ ratings — a balanced measure of matchup quality that can better detect when both teams in a game are either good or bad — has been below average in 13 of the 17 games played in London. On top of that, all four games to be played in London this year will be below average, according to the teams’ current Elo ratings.
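
For the curious, the metric is easy to compute yourself. Here’s a quick sketch (the function is my own illustration; the league-average baseline of 1500 comes from the table note below):

```python
# Matchup quality as the harmonic mean of the two teams' Elo ratings,
# measured against a league-average rating of roughly 1500. The harmonic
# mean is dragged toward the smaller value, so one bad team is enough to
# sink a matchup's score -- which is the point of using it here.

def matchup_quality(away_elo, home_elo, league_avg=1500):
    harmonic_mean = 2 / (1 / away_elo + 1 / home_elo)
    return harmonic_mean - league_avg  # negative = below-average matchup

# The 2014 Miami (1449) at Oakland (1327) game from the table below:
print(round(matchup_quality(1449, 1327)))  # -115
```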

London NFL games have been consistently below average

The harmonic mean of the Elo ratings of the teams in each matchup compared with 1500, roughly the rating of an average NFL team

YEAR DESIGNATED AWAY ELO DESIGNATED HOME ELO HARMONIC MEAN +/- AVERAGE
2014 Miami 1449 Oakland 1327 1385 -115
2015 Buffalo 1512 Jacksonville 1310 1404 -96
2017 Cleveland 1321 Minnesota 1501 1405 -95
2016 Indianapolis 1469 Jacksonville 1350 1407 -93
2010 Denver 1401 San Francisco 1418 1409 -91
2014 Dallas 1557 Jacksonville 1298 1416 -84
2013 San Francisco 1642 Jacksonville 1246 1417 -83
2007 N.Y. Giants 1553 Miami 1358 1449 -51
2013 Pittsburgh 1448 Minnesota 1477 1462 -38
2015 New York Jets 1478 Miami 1449 1463 -37
2017 Baltimore 1539 Jacksonville 1396 1464 -36
2014 Detroit 1541 Atlanta 1405 1470 -30
2017 Arizona 1529 L.A. Rams 1418 1471 -29
2015 Detroit 1432 Kansas City 1514 1472 -28
2016 N.Y. Giants 1466 L.A. Rams 1481 1473 -27
2017 New Orleans 1460 Miami 1519 1489 -11
2009 New England 1630 Tampa Bay 1375 1492 -8
2016 Washington 1509 Cincinnati 1525 1517 17
2012 New England 1678 St. Louis 1393 1522 22
2008 San Diego 1600 New Orleans 1470 1532 32
2011 Chicago 1543 Tampa Bay 1527 1535 35

All 2017 games are based on Elo ratings before Week 3.

The Jaguars are a big part of this, of course. Jacksonville has played in London four times, and the Elo rating of each of those four Jaguar teams ranks in the bottom five (among all 34 teams). Joining them in that bottom five are the 2014 Oakland Raiders. And it turns out that the Raiders’ game against the Miami Dolphins that year was the worst London matchup so far based on our Elo ratings. That game was so dreary that those Raiders, who fell to 0-4 after losing to Miami, fired their coach, Dennis Allen, not long after their plane touched down in the U.S. Perhaps by no coincidence, the Dolphins coach that year, Joe Philbin, would be fired the next season after starting 1-3. Philbin’s last game would be a loss to the Jets … in London.

But not every game played in London has been between NFL bottom feeders — sometimes a good team makes the trip (and, sure, plays a bottom feeder). The Brits have experienced Tom Brady and the New England Patriots twice, as well as the San Francisco 49ers the season after their latest Super Bowl appearance. But if you remove those three teams, the average London team, including this year’s Ravens and Jags, has an Elo rating of 1444. That’s roughly on par with this year’s 0-2 Cincinnati Bengals.

NFL fans will generally tune in regardless of who is playing. So perhaps the NFL’s intention was that the consistently poor quality of opponents would be offset by competitive, exciting contests. If that’s the case, the plan is generally working.

Blowout or bust

The point differential for regular-season NFL games played in London

YEAR DESIGNATED AWAY POINTS DESIGNATED HOME POINTS POINT DIFF DECIDED BY ONE SCORE
2016 Washington 27 Cincinnati 27 0 ✓
2014 Detroit 22 Atlanta 21 1 ✓
2016 Indianapolis 27 Jacksonville 30 3 ✓
2007 New York Giants 13 Miami 10 3 ✓
2015 Buffalo 31 Jacksonville 34 3 ✓
2008 San Diego 32 New Orleans 37 5 ✓
2011 Chicago 24 Tampa Bay 18 6 ✓
2016 New York Giants 17 L.A. Rams 10 7 ✓
2013 Pittsburgh 27 Minnesota 34 7 ✓
2010 Denver 16 San Francisco 24 8 ✓
2015 New York Jets 27 Miami 14 13
2014 Dallas 31 Jacksonville 17 14
2014 Miami 38 Oakland 14 24
2009 New England 35 Tampa Bay 7 28
2013 San Francisco 42 Jacksonville 10 32
2015 Detroit 10 Kansas City 45 35
2012 New England 45 St. Louis 7 38

Source: ESPN Stats & Information Group

Ten of the 17 games — or 59 percent — have been decided by one score. That might not sound so thrilling, but just 35 percent of all NFL games played since 2007 have been decided by 8 points or fewer. One of last year’s London games was so tightly matched, no one won it. (Fortunately for Cincinnati and Washington, they were playing in the one NFL location where fans are content with a tie.)

Low-quality games usually lead to drops in attendance toward the end of the season. Not in London, though. All but two games have attracted a crowd of more than 80,000, with the highest NFL London crowd at 84,488 — for last year’s tie at Wembley. To put that in context, the average London crowd would have been the second-highest home attendance of any team in the league last season (behind only the Dallas Cowboys).

As Goodell continues to push some of his most mediocre teams onto the international scene, it turns out that they’re rewarding fans with some of the league’s most competitive play.

[syndicated profile] 538_feed

Posted by Walt Hickey

You’re reading Significant Digits, a daily digest of the numbers tucked inside the news.


2 holdouts

With Nicaragua reportedly set to join the Paris climate accords — it held out in 2015 because it believed the deal didn’t go far enough — there are now only two holdouts from the landmark deal: Syria and the United States, which President Trump has said will pull out of the agreement. [Bloomberg]


AB 485

A California bill awaiting the signature of Gov. Jerry Brown would outlaw puppy mills, banning pet stores from selling cats, dogs and bunnies that did not come from a shelter or rescue. [The New York Times]


1,772 episodes

This is easily the most staggering statistic I have come across while writing this column: There have been 1,772 individual episodes of HGTV’s “House Hunters” since it debuted in 1999. I could watch an episode of “House Hunters” every day for nearly five years without seeing a single repeat. When we’re just a radioactive cinder in the gaze of an expanding sun, whoever or whatever succeeds us will be able to say, “damn … they were good at finding and obtaining houses.” [Vulture]


80,000 311 calls

Hurricane Sandy left an indelible mark on New York City, and the effects of the storm can still be seen and felt years later. More than 36 million calls were placed to NYC’s 311 service from just before Sandy hit in late 2012 through earlier this week. Nearly 80,000 of them were related to the storm. And the tail is super long — 142 such calls were made in 2017 (as of Monday). [FiveThirtyEight]


3.5 million people

Hurricane Maria has left the entire island of Puerto Rico and its 3.5 million residents without power. That’s to say nothing of flooding and other destruction. Maria, now a Category 3 storm, is currently hitting the Dominican Republic. [BBC]


$31.4 million

Russian trade with North Korea more than doubled to $31.4 million in the first quarter of 2017. Reuters found eight North Korean fuel ships that left Russia ostensibly en route to China or South Korea only to change their final destination to North Korea. [Reuters]


Like Significant Digits? Like sports? You’ll love Besides the Points, our new sports newsletter.

If you see a significant digit in the wild, send it to @WaltHickey.

The Media Has A Probability Problem

Sep. 21st, 2017 09:47 am
[syndicated profile] 538_feed

Posted by Nate Silver

This is the 11th and final article in a series that reviews news coverage of the 2016 general election, explores how Donald Trump won and why his chances were underrated by most of the American media.

Two Saturday nights ago, just as Hurricane Irma had begun its turn toward Florida, the Associated Press sent out a tweet proclaiming that the storm was headed toward St. Petersburg and not its sister city Tampa, just 17 miles to the northeast across Tampa Bay.

Hurricane forecasts have improved greatly over the past few decades, becoming about three times more accurate at predicting landfall locations. But this was a ridiculous, even dangerous tweet: The forecast was nowhere near precise enough to distinguish Tampa from St. Pete. For most of Irma’s existence, the entire Florida peninsula had been included in the National Hurricane Center’s “cone of uncertainty,” which covers two-thirds of possible landfall locations. The slightest change in conditions could have had the storm hitting Florida’s East Coast, its West Coast, or going right up the state’s spine. Moreover, Irma measured hundreds of miles across, so even areas that weren’t directly hit by the eye of the storm could have suffered substantial damage. By Saturday night, the cone of uncertainty had narrowed, but trying to distinguish between St. Petersburg and Tampa was like trying to predict whether 31st Street or 32nd Street would suffer more damage if a nuclear bomb went off in Manhattan.

To its credit, the AP deleted the tweet the next morning. But the episode was emblematic of some of the media’s worst habits when covering hurricanes — and other events that involve interpreting probabilistic forecasts. Before a storm hits, the media demands impossible precision from forecasters, ignoring the uncertainties in the forecast and overhyping certain scenarios (e.g. the storm hitting Miami) at the expense of other, almost-as-likely ones (e.g. the storm hitting Marco Island). Afterward, it casts aspersions on the forecasts unless they happened to exactly match the scenario the media hyped up the most.

Indeed, there’s a fairly widespread perception that meteorologists performed poorly with Irma, having overestimated the threat to some places and underestimated it elsewhere. Even President Trump chimed in to say the storm hadn’t been predicted well, tweeting that the devastation from Irma had been “far greater, at least in certain locations, than anyone thought.” In fact, the Irma forecasts were pretty darn good: Meteorologists correctly anticipated days in advance that the storm would take a sharp right turn at some point while passing by Cuba. The places where Irma made landfall — in the Caribbean and then in Florida — were consistently within the cone of uncertainty. The forecasts weren’t perfect: Irma’s eye wound up passing closer to Tampa than to St. Petersburg after all, for example. But they were about as good as advertised. And they undoubtedly saved a lot of lives by giving people time to evacuate in places like the Florida Keys.

The media keeps misinterpreting data — and then blaming the data

You won’t be surprised to learn that I see a lot of similarities between hurricane forecasting and election forecasting — and between the media’s coverage of Irma and its coverage of the 2016 campaign. In recent elections, the media has often overestimated the precision of polling, cherry-picked data and portrayed elections as sure things when that conclusion very much wasn’t supported by polls or other empirical evidence.

As I’ve documented throughout this series, polls and other data did not support the exceptionally high degree of confidence that news organizations such as The New York Times regularly expressed about Hillary Clinton’s chances. (We’ve been using the Times as our case study throughout this series, both because they’re such an important journalistic institution and because their 2016 coverage had so many problems.) On the contrary, the more carefully one looked at the polling, the more reason there was to think that Clinton might not close the deal. In contrast to President Obama, who overperformed in the Electoral College relative to the popular vote in 2012, Clinton’s coalition (which relied heavily on urban, college-educated voters) was poorly configured for the Electoral College. In contrast to 2012, when hardly any voters were undecided between Obama and Mitt Romney, about 14 percent of voters went into the final week of the 2016 campaign undecided about their vote or saying they planned to vote for a third-party candidate. And in contrast to 2012, when polls were exceptionally stable, they were fairly volatile in 2016, with several swings back and forth between Clinton and Trump — including the final major swing of the campaign (after former FBI Director James Comey’s letter to Congress), which favored Trump.

By Election Day, Clinton simply wasn’t all that much of a favorite; she had about a 70 percent chance of winning according to FiveThirtyEight’s forecast, as compared to 30 percent for Trump. Even a 2- or 3-point polling error in Trump’s favor — about as much as polls had missed on average, historically — would likely be enough to tip the Electoral College to him. While many things about the 2016 election were surprising, the fact that Trump narrowly won when polls had him narrowly trailing was an utterly routine and unremarkable occurrence. The outcome was well within the “cone of uncertainty,” so to speak.

So if the polls called for caution rather than confidence, why was the media so sure that Clinton would win? I’ve tried to address that question throughout this series of essays — which we’re finally concluding, much to my editor’s delight.

Probably the most important problem with 2016 coverage was confirmation bias — coupled with what you might call good old-fashioned liberal media bias. Journalists just didn’t believe that someone like Trump could become president, running a populist and at times also nationalist, racist and misogynistic campaign in a country that had twice elected Obama and whose demographics supposedly favored Democrats. So they cherry-picked their way through the data to support their belief, ignoring evidence — such as Clinton’s poor standing in the Midwest — that didn’t fit the narrative.

But the media’s relatively poor grasp of probability and statistics also played a part: It led them to misinterpret polls and polling-based forecasts that could have served as a reality check against their overconfidence in Clinton.

How a probabilistic election forecast works — and how it can be easy to misinterpret

The idea behind an election forecast like FiveThirtyEight’s is to take polls (“Clinton is ahead by 3 points”) and transform them into probabilities (“She has a 70 percent chance of winning”). I’ve been designing and publishing forecasts like these for 15 years in two areas (politics and sports) that receive widespread public attention. And I’ve found there are basically two ways that things can go wrong.
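
To give a flavor of that transformation, here’s a deliberately toy version: it treats the final margin as normally distributed around the polling lead, with a standard deviation standing in for historical polling error. Both numbers are illustrative assumptions; the real model layers in state-level correlations, undecided voters and much more.

```python
# A toy poll-to-probability conversion, not FiveThirtyEight's actual model:
# assume the true final margin is normally distributed around the current
# polling lead. The 5-point standard deviation is an assumed stand-in for
# historical polling error.
from statistics import NormalDist

def win_probability(poll_lead, error_sd=5.0):
    """Chance the leading candidate's final margin lands above zero."""
    return 1 - NormalDist(mu=poll_lead, sigma=error_sd).cdf(0)

print(round(win_probability(3.0), 2))  # a 3-point lead -> about 0.73
```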

First, there are errors of analysis. As an example, if you had a model of last year’s election that concluded that Clinton had a 95 or 99 percent chance of winning, you committed an analytical error. Models that expressed that much confidence in her chances had a host of technical flaws, such as ignoring the correlations in outcomes between states.

But while statistical modeling may not always hit the mark, people’s subjective estimates of how polls translate into probabilities are usually even worse. Given a complex set of polling data — say, the Democrat is ahead by 3 points in Pennsylvania and Michigan, tied in Florida and North Carolina, and down by 2 points in Ohio — it’s far from obvious how to figure out the candidate’s chances of winning the Electoral College. Ad hoc attempts to do so can lead to problematic coverage like this article that appeared in The New York Times last Oct. 31, three days after Comey had sent his letter to Congress:

Mrs. Clinton’s lead over Mr. Trump appears to have contracted modestly, but not enough to threaten her advantage over all or to make the electoral math less forbidding for Mr. Trump, Republicans and Democrats said. […]

The loss of a few percentage points from Mrs. Clinton’s lead, and perhaps a state or two from the battleground column, would deny Democrats a possible landslide and likely give her a decisive but not overpowering victory, much like the one President Obama earned in 2012. […]

You’ll read lots of clips like this during an election campaign, full of claims about the “electoral math,” and they often don’t hold up to scrutiny. In this case, the article’s assertion that the loss of “a few percentage points” wouldn’t hurt Clinton’s chances of victory was wrong, and not just in hindsight; instead, the Comey letter made Clinton much more vulnerable, roughly doubling Trump’s probability of winning.

But even if you get the modeling right, there’s another whole set of problems to think about: errors of interpretation and communication. These can run in several different directions. Consumers can misunderstand the forecasts, since probabilities are famously open to misinterpretation. But people making the forecasts can also do a poor job of communicating the uncertainties involved. For example, although weather forecasters are generally quite good at describing uncertainty, the cone of uncertainty is potentially problematic because viewers might not realize it represents only two-thirds of possible landfall locations.

Intermediaries — other people describing a forecast on your behalf — can also be a problem. Over the years, we’ve had many fights with well-meaning TV producers about how to represent FiveThirtyEight’s probabilistic forecasts on air. (We don’t want a state where the Democrat has only a 51 percent chance to win to be colored in solid blue on their map, for instance.) And critics of statistical forecasts can make communication harder by passing along their own misunderstandings to their readers. After the election, for instance, The New York Times’ media columnist bashed the newspaper’s Upshot model (which had estimated Clinton’s chances at 85 percent) and others like it for projecting “a relatively easy victory for Hillary Clinton with all the certainty of a calculus solution.” That’s pretty much exactly the wrong way to describe such a forecast, since a probabilistic forecast is an expression of uncertainty. If a model gives a candidate a 15 percent chance, you’d expect that candidate to win about one election in every six or seven tries. You wouldn’t expect the fundamental theorem of calculus to be wrong … ever.

I don’t think we should be forgiving of innumeracy like this when it comes from prominent, experienced journalists. But when it comes to the general public, that’s a different story — and there are plenty of things for FiveThirtyEight and other forecasters to think about in terms of our communication strategies. There are many potential avenues for confusion. People associate numbers with precision, so using numbers to express uncertainty in the form of probabilities might not be intuitive. (Listing a decimal place in our forecast, as FiveThirtyEight historically has done — e.g. 28.6 percent chance rather than 29 percent or 30 — probably doesn’t help in this regard.) Also, both probabilities and polls are usually listed as percentages, so people can confuse one for the other — they might mistake a forecast showing Clinton with a 70 percent chance of winning as meaning she has a 70-30 polling lead over Trump, which would put her on her way to a historic, 40-point blowout.

What can also get lost is that election forecasts — like hurricane forecasts — represent a continuous range of outcomes, none of which is likely to be exactly right. The following diagram is an illustration that we’ve used before to show uncertainty in the FiveThirtyEight forecast. It’s a simplification — showing a distribution for the national popular vote only, along with which candidate wins the Electoral College. Still, the diagram demonstrates several important concepts for interpreting polls and forecasts:

  • First, as I mentioned, no exact outcome is all that likely. If you rounded the popular vote to the nearest whole number, the most likely outcome was Clinton winning by 4 percentage points. Nonetheless, the chance that she’d win by exactly 4 points was only about 10 percent (see the short sketch after this list). “Calling” every state correctly in the Electoral College is even harder. FiveThirtyEight’s model did it in 2012 — in a lucky break that may have given people a false impression about how easy it is to forecast elections — but we estimated that the chances of having a perfect forecast again in 2016 were only about 2 percent. Thus, properly measuring the uncertainty is at least as important a part of the forecast as plotting the single most likely course. You’re almost always going to get something “wrong” — so the question is whether you can distinguish the relatively more likely upsets from the relatively less likely ones.
  • Second, the distribution of possible outcomes was fairly wide last year. The distribution is based on how accurate polls of U.S. presidential elections have been since 1972, accounting for the number of undecideds and the number of days until the election. The distribution was wider than usual because there were a lot of undecided voters — and more undecided voters mean more uncertainty. Even in a normal year, however, the polls aren’t quite as precise as most people assume.
  • Third, the forecast is continuous, rather than binary. When evaluating a poll or a polling-based forecast, you should look at the margin between the poll and the actual result and not just who won and lost. If a poll showed the Democrat winning by 1 point and the Republican won by 1 point instead, the poll did a better job than if the Democrat had won by 9 points (even though the poll would have “called” the outcome correctly in the latter case). By this measure, polls in this year’s French presidential election — which Emmanuel Macron was predicted to win by 22 points but actually won by 32 points — were much worse than polls of the 2016 U.S. election.
  • Finally, the actual outcome in last year’s election was right in the thick of the probability distribution, not out toward the tails. The popular vote was obviously pretty close to what the polls estimated it would be. It also wasn’t that much of a surprise that Trump won the Electoral College, given where the popular vote wound up. (Our forecast gave Trump better than a 25 percent chance of winning the Electoral College conditional on losing the popular vote by 2 points, an indication of his demographic advantages in the swing states.) One might even dare say that the result last year was relatively predictable, given the range of possible outcomes.
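
As a quick check on the first bullet’s “no exact outcome is likely” point, here’s a toy calculation. The numbers (a margin distributed around Clinton +4 with a 4-point standard deviation) are assumptions chosen to make the arithmetic visible, not our actual 2016 distribution:

```python
# With the margin distributed Normal(mean=4, sd=4) -- assumed numbers --
# the chance that the popular-vote margin rounds to exactly 4 points
# (i.e., lands between 3.5 and 4.5) is only about 10 percent.
from statistics import NormalDist

margin = NormalDist(mu=4.0, sigma=4.0)
p_exactly_four = margin.cdf(4.5) - margin.cdf(3.5)
print(round(p_exactly_four, 2))  # ~0.1
```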

The press presumed that Clinton would win, but the public saw a close race

I’ve often heard it asserted that the widespread presumption of an inevitable Clinton victory was itself a problem for her campaign — Clinton has even made a version of this claim herself. So we have to ask: Could this misreading of the polls — and polling-based forecasts — actually have affected the election’s outcome?

It depends on whether you’re talking about how the media and other political elites read the polls — and how that influenced their behavior — or how the general public did. Regular voters, it turns out, were not especially confident about Clinton’s chances last year. For instance, in the final edition of the USC Dornsife/Los Angeles Times tracking poll, which asked voters to guess the probability of Trump and Clinton winning the election, the average voter gave Clinton only a 53 percent chance of winning and gave Trump a 43 percent chance — so while respondents slightly favored Clinton, it wasn’t with much confidence at all.

The American National Election Studies also asked voters to predict the most likely winner of the race, as it’s been doing since 1952. It found that 61 percent of voters expected Clinton to win, as compared to 33 percent for Trump. This proportion is about the same as other years — such as 2004 — in which polls showed a fairly close race, although one candidate (in that case, George W. Bush) was usually ahead. While, unlike the LA Times poll, the ANES did not ask voters to estimate the probability of Clinton winning, it did ask voters a follow-up question about whether they expected the election to be close or thought one of the candidates would “win by quite a bit.” Only 20 percent of respondents predicted a Clinton landslide, and only 7 percent expected a Trump landslide. Instead, almost three-quarters of voters correctly predicted a close outcome.

Voters weren’t overly bullish on Clinton’s chances

Confidence in each party’s presidential candidate in the months before elections

WHICH PARTY WILL WIN THE PRESIDENCY?

YEAR DEM. REP.
2016 61% 33% ✓
2012 64% ✓ 25%
2008 59% ✓ 30%
2004 29% 62% ✓
2000 47% 44% ✓
1996 86% ✓ 10%
1992 56% ✓ 31%
1988 23% 63% ✓
1984 12% 81% ✓
1980 46% 38% ✓
1976 43% ✓ 41%
1972 7% 83% ✓
1968 22% 57% ✓
1964 81% ✓ 8%
1960 33% ✓ 43%
1956 19% 68% ✓
1952 35% 43% ✓

✓ indicates the party that won the election.

Source: American National Election Studies

So be wary if you hear people within the media bubble assert that “everyone” presumed Clinton was sure to win. Instead, that presumption reflected elite groupthink — and it came despite the polls as much as because of the polls. There was a bewilderingly large array of polling data during last year’s campaign, and it didn’t always tell an obvious story. During the final week of the campaign, Clinton was ahead in most polls of most swing states, but with quite a few exceptions — and many of Clinton’s leads were within the margin of error and had been fading during the final 10 days of the campaign. The public took in this information and saw Clinton as the favorite, but they didn’t expect a blowout and viewed the outcome as highly uncertain. Our model read it the same way. The media looked at the same ambiguous data and saw what they wanted in it, using it to confirm their presumption that Trump couldn’t win.

News organizations learned the wrong lessons from 2012

During the 2012 election, FiveThirtyEight’s forecast consistently gave Obama better odds of winning re-election than the conventional wisdom did. Somehow in the midst of it, I became an avatar for projecting certainty in the face of doubt. But that role was always miscast — indeed, it’s nearly the opposite of what I hope readers take away from FiveThirtyEight’s work. In addition to making my own forecasts, I’ve spent a lot of my life studying probability and uncertainty. Cover these topics for long enough and you’ll come to a fairly clear conclusion: When it comes to making predictions, the world usually needs less certainty, not more.

A major takeaway from my book and from other people’s research on prediction is that most experts — including most journalists — make overconfident forecasts. (Weather forecasters are an important exception.) Events that experts claim to be nearly certain (say, a 95 percent probability) are often merely probable instead (the real probability is, say, 70 percent). And events they deem to be nearly impossible occur with some frequency. Another, related bias is that experts don’t change their minds quickly enough in the face of new information,24 sticking stubbornly to their previous beliefs even after the evidence has begun to mount against them.
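
A calibration check makes this concrete. The sketch below — in Python, with invented data standing in for a real forecasting track record — bins past predictions by their stated probability and compares that with how often the predicted events actually happened. Overconfidence shows up as rows that read “stated ~95 percent, happened ~70 percent.”

    # Toy calibration check: each entry is (stated probability, did it happen?).
    # The data here are invented; with a real track record, overconfidence
    # shows up as bins where events happen less often than was claimed.
    forecasts = [
        (0.95, True), (0.95, False), (0.95, True), (0.90, True), (0.90, False),
        (0.70, True), (0.70, False), (0.60, True), (0.60, False), (0.60, True),
    ]

    def calibration(forecasts, bins=((0.85, 1.01), (0.50, 0.85))):
        for lo, hi in bins:
            in_bin = [(p, hit) for p, hit in forecasts if lo <= p < hi]
            if in_bin:
                stated = sum(p for p, _ in in_bin) / len(in_bin)
                actual = sum(hit for _, hit in in_bin) / len(in_bin)
                print(f"stated ~{stated:.0%}, happened {actual:.0%} (n={len(in_bin)})")

    calibration(forecasts)

On the toy data this prints a “stated ~93%, happened 60%” line for the high-confidence bin — exactly the signature of the overconfident expert.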

Media coverage of major elections had long been an exception to this rule of expert overconfidence. For a variety of reasons — no doubt including the desire to inject drama into boring races — news coverage tended to overplay the underdog’s chances in presidential elections and to exaggerate swings in the polls. Even in 1984, when Ronald Reagan led Walter Mondale by 15 to 20 percentage points in the stretch run of the campaign, The New York Times somewhat credulously reported on Mondale’s enthusiastic crowds and talked up the possibility of a Dewey-defeats-Truman upset. The 2012 election — although it was a much closer race than 1984 — was another such example: Reporting focused too much on national polls and not enough on Obama’s Electoral College advantage, and thus portrayed the race as a “toss-up” when in reality Obama was a reasonably clear favorite. (FiveThirtyEight’s forecast gave Obama about a 90 percent chance of winning re-election on election morning.)

Since then, the pendulum has swung too far in the other direction, with the media often expressing more certainty about the outcome than the polls justify. In addition to lowballing Trump’s chances, for instance, the media also badly underestimated the probability that the U.K. would vote to leave the European Union in 2016 and that this year’s U.K. general election would result in a hung parliament. There are still some exceptions — the conventional wisdom probably overestimated Marine Le Pen’s chances in France. Nonetheless, there’s been a noticeable shift from the way elections used to be covered, and it’s worth pausing to consider why.

One explanation is that news organizations learned the wrong lessons from 2012. The “moral parable” of 2012, as Scott Alexander wrote, is that Romney was “the arrogant fool who said that all the evidence against him was wrong, but got his comeuppance.” Put another way, the lesson of 2012 was to “trust the data,” especially the polls.

FiveThirtyEight and I became emblems of that narrative, even though we sometimes tried to resist it. What I think people forget is that the confidence our model expressed in Obama’s chances in 2012 was contingent upon circumstances peculiar to 2012 — namely that Obama had a much more robust position in the Electoral College than national polls implied, and that there were very few undecided voters, reducing uncertainty. The 2012 election may have superficially looked like a toss-up, but Obama was actually a reasonably clear favorite. Pretty much the opposite was true in 2016 — the more carefully one evaluated the polls, the more uncertain the outcome of the Electoral College appeared. The real lesson of 2012 wasn’t “always trust the polls” so much as “be rigorous in your evaluation of the polls, because your superficial impression of them can be misleading.”

Another issue is that uncertainty is a tough sell in a competitive news environment. “The favorite is indeed favored, just not by as much as everyone thinks once you look at the data more carefully, so bet on the favorite at even money but the underdog against the point spread” isn’t that complicated a story, but it can be a difficult message to get across on TV in the midst of an election campaign when everyone has the attention span of a sugar-high 4-year-old. It can be even harder on social media, where platforms like Facebook reward simplistic coverage that confirms people’s biases.
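
For what it’s worth, that betting framing can be made concrete with toy numbers (mine, not figures from any real market): suppose the favorite truly wins about two-thirds of the time, typically by around 3 points, while the market prices them as a near-lock laying 7 points. Then both halves of the message hold at once, as a quick simulation shows:

    import random

    # Toy numbers, purely illustrative: the favorite's true margin is assumed
    # to be N(3, 7), but the market treats them as a much bigger favorite
    # (a -7 point spread). Tally the expected value of $1 on each bet.
    def betting_sketch(trials=100_000):
        even_money = against_spread = 0
        for _ in range(trials):
            margin = random.gauss(3, 7)                # assumed true margin for the favorite
            even_money += 1 if margin > 0 else -1      # $1 even-money bet on the favorite
            against_spread += 1 if margin < 7 else -1  # $1 on the underdog getting +7
        print(f"favorite at even money, EV per $1:     {even_money / trials:+.2f}")
        print(f"underdog against the spread, EV per $1: {against_spread / trials:+.2f}")

    betting_sketch()

Under these assumptions both bets are profitable at once: the favorite really is favored, just not by as much as the market assumes. That’s a perfectly coherent position — it just doesn’t compress well into a chyron.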

Journalists should be wary of ‘the narrative’ and more transparent about their provisional understanding of developing stories

But every news organization faced competitive pressure in covering last year’s election — and only some of them screwed up the story. Editorial culture mattered a lot. In general, the problems were worse at The New York Times and other organizations that (as Michael Cieply, a former Times editor, put it) heavily emphasized “the narrative” of the campaign and encouraged reporters to “generate stories that fit the pre-designated line.”

If you re-read the Times’ general election coverage from the conventions onward,25 you’ll be struck by how consistent it was from start to finish. Although the polls were fairly volatile in 2016, you can’t really distinguish the periods when Clinton had a clear advantage from those when things were pretty tight. Instead, the narrative was consistent: Clinton was a deeply flawed politician, the “worst candidate Democrats could have run,” cast in “shadows” and “doubts” because of her ethical lapses. However, she was almost certain to win because Trump appealed to too narrow a range of demographic groups and ran an unsophisticated campaign, whereas Clinton’s diverse coalition and precise voter-targeting efforts gave her an inherent advantage in the Electoral College.

It was a consistent story, but it was consistently wrong.

One can understand why news organizations find “the narrative” so tempting. The world is a complicated place, and journalists are expected to write authoritatively about it under deadline pressure. There’s a management consulting adage that says when creating a product, you can pick any two of these three objectives: 1. fast, 2. good and 3. cheap. You can never have all three at once. The equivalent in journalism is that a story can be 1. fast, 2. interesting and/or 3. true — two out of the three — but it’s hard for it to be all three at the same time.

Deciding on the narrative ahead of time seems to provide a way out of the dilemma. Pre-writing substantial portions of the story — or at least, having a pretty good idea of what you’re going to say — allows it to be turned around more quickly. And narratives are all about wrapping the story up in a neat-looking package and telling readers “what it all means,” so the story is usually engaging and has the appearance of veracity.

The problem is that you’re potentially sacrificing No. 3, “true.” By bending the facts to fit your template, you run the risk of getting the story completely wrong. To make matters worse, most people — including most reporters and editors (also: including me) — have a strong tendency toward confirmation bias. Presented with a complicated set of facts, it takes a lot of work for most of us not to connect the dots in a way that confirms our prejudices. An editorial culture that emphasizes “the narrative” indulges these bad habits rather than resists them.

Instead, news organizations reporting under deadline pressure need to be more comfortable with a world in which our understanding of developing stories is provisional and probabilistic — and will frequently turn out to be wrong. FiveThirtyEight’s philosophy is basically that the scientific method, with its emphasis on verifying hypotheses through rigorous analysis of data, can serve as a model for journalism. The reason is not because the world is highly predictable or because data can solve every problem, but because human judgment is more fallible than most people realize — and being more disciplined and rigorous in your approach can give you a fighting chance of getting the story right. The world isn’t one where things always turn out exactly as we want them to or expect them to. But it’s the world we live in.

CORRECTION (Sept. 21, 2:40 p.m.): A previous version of footnote No. 10 mistakenly referred to the Electoral College in place of the national popular vote when discussing Trump’s chances of winning the election. The article has been updated.
