But we’ve already covered these modeling issues at length both before and after the election, so I won’t dwell on them quite as much here. That’s because we spent a lot of time last spring and summer reflecting on the nomination campaign. Meanwhile, he beat his polls by only 2 to 3 percentage points in the average swing state.3 Certainly, there were individual pollsters that had some explaining to do, especially in Michigan, Wisconsin and Pennsylvania, where Trump beat his polls by a larger amount. So here’s how we’ll proceed. After Trump’s victory, the various academics and journalists who’d built models to estimate the election odds engaged in detailed self-assessments of how their forecasts had performed. While FiveThirtyEight’s final “polls-only” forecast gave Trump a comparatively generous 3-in-10 chance (29 percent) of winning the Electoral College, it was somewhat outside the consensus, with some other forecasts showing Trump with less than a 1 in 100 shot. So did many of the statistical models of the campaign, of course.
Then I’ll have some concluding thoughts. My view is that we had lots of problems, but that we got most of them out of the way good and early by botching our assessment of Trump’s chances of winning the Republican primary. This is the question I’ve spent the past two to three months thinking about. I think it’s important to single out examples of better and worse coverage, as opposed to presuming that news organizations didn’t have any choice in how they portrayed the race, or bashing “the media” at large. Another myth is that Trump’s victory represented some sort of catastrophic failure for the polls. And if almost everyone got the first draft of history wrong in 2016, perhaps there’s still time to get the second draft right. Technically speaking, Trump ended the day on July 30 with a 50.1 percent chance of winning in our polls-only forecast. Election post-mortems by major news organizations have tended to skirt past how much importance they attached to FBI Director James Comey’s letter to Congress on Oct. 28, for instance, and how much the polls shifted toward Trump in the immediate aftermath of Comey’s letter. Specifically, it will be stories published by the Times’s political desk (as opposed to by its investigations team, in its editorial pages or by its data-oriented subsite, The Upshot).
To some of you, a forecast that showed Trump with about a 30 percent chance of winning when the consensus view was that his chances were around 15 percent6 will self-evidently seem smart. And the Times, like the Clinton campaign, largely ignored Michigan and Wisconsin. But they won’t be easy to correct unless journalists’ incentives or the culture of political journalism change. It’s much easier to blame the polls for the failure to foresee the outcome, or the Clinton campaign for blowing a sure thing. You can find our self-critique of our primary coverage here. The first half will cover what I view as technical errors, while the second half will fall under the heading of journalistic errors and cognitive biases. (If Clinton had won Michigan and Wisconsin, she’d still have only 258 electoral votes.4 To beat Trump, she’d have also needed a state such as Pennsylvania or Florida where she campaigned extensively.) Articles commissioned by the Times’s political desk regularly asserted that the Electoral College was a strength for Clinton, when in fact it was a weakness. An article it published on Nov. 1 smartly focused on … Elsewhere at the Times, Nate Cohn at The Upshot provided a number of excellent analyses, including a Sept. 20 article that … And from the start of the general election onward, Sean Trende at RealClearPolitics … Most of these mistakes were replicated by other mainstream news organizations, and also often by empirically minded journalists and model-builders.
They also suggest there are real shortcomings in how American politics are covered, including pervasive groupthink among media elites, an unhealthy obsession with the insider’s view of politics, a lack of analytical rigor, a failure to appreciate uncertainty, a sluggishness to self-correct when new evidence contradicts pre-existing beliefs, and a narrow viewpoint that lacks perspective from the longer arc of American history. This average reflects some states (such as Wisconsin) where Trump beat his polls by more than 2.7 points, along with others (such as Nevada) where Clinton beat her polls. Never mind, for a moment, that these states wouldn’t have been enough to change the overall result. Independent evaluations also judged FiveThirtyEight’s forecast to be the most accurate (or perhaps better put, the least inaccurate) of the models. Throughout the campaign, the polls had hallmarks of high uncertainty, indicating a volatile election with large numbers of undecided voters. The technical errors ought to be easier to fix, but they have narrower applications.8 The cognitive biases reflect more deep-seated problems and have more implications for how Trump’s presidency will be covered; they’re also the root cause of some of the technical errors.
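The "least inaccurate" comparison can be made concrete with a proper scoring rule. As an illustrative sketch (the Brier score is my choice of metric here; the text does not say which measure the independent evaluations actually used), here is how the probabilities cited in this article grade against the actual outcome:

```python
# Brier score: squared error between a probability forecast and the 0/1 outcome
# (0 is perfect; lower is better). One election supplies a single data point,
# which is why judging these models on 2016 alone is so difficult.

def brier_score(forecast_prob: float, outcome: int) -> float:
    """forecast_prob: probability assigned to the event; outcome: 1 if it occurred."""
    return (forecast_prob - outcome) ** 2

# Trump won, so outcome = 1 for "Trump wins the Electoral College."
# 0.29 (FiveThirtyEight polls-only), 0.15 (rough consensus) and 0.01 (the most
# confident models) are the probabilities cited in the text.
for name, p in [("FiveThirtyEight", 0.29), ("consensus", 0.15), ("1-in-100 model", 0.01)]:
    print(f"{name}: {brier_score(p, 1):.4f}")
```

On this single outcome the 29 percent forecast scores best, but a forecaster who blindly said 99 percent for the eventual winner would score better still; that is exactly the one-outcome evaluation problem the article describes.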
Specifically, Trump beat his FiveThirtyEight adjusted polling average by a net of 2.7 percentage points in the average state, weighted by the state’s likelihood of being the tipping-point state. (Media consolidation may itself be a part of the reason that Trump’s chances were underestimated, insofar as it contributed to groupthink about his chances.) Some of the models were based only on the past few elections, ignoring earlier years, such as 1980, when the polling had been way off. I want to lay down a few ground rules for how this series of articles will proceed — but first, a few words about FiveThirtyEight’s coverage of Trump. Why, then, had so many people who covered the campaign been so confident of Clinton’s chances? While it’s challenging to judge a probabilistic forecast on the basis of a single outcome, we have no doubt that we got the Republican primary “wrong.” Meaning: coverage of campaign tactics and the Electoral College, polls and forecasts, demographics and other data, and the causes of Trump’s eventual defeat of Hillary Clinton.
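The tipping-point-weighted average behind that 2.7-point figure is just a weighted mean of per-state polling errors. A minimal sketch, with made-up error and weight values (nothing below beyond the state abbreviations reflects FiveThirtyEight's actual numbers):

```python
# Sketch of a tipping-point-weighted average of polling errors.
# The error and weight values are hypothetical illustrations only.

def weighted_poll_error(errors: dict, weights: dict) -> float:
    """errors: points by which Trump beat his adjusted polling average per state;
    weights: each state's probability of being the tipping-point state."""
    total_w = sum(weights.values())
    return sum(errors[s] * weights[s] for s in errors) / total_w

errors = {"PA": 3.9, "WI": 6.0, "MI": 3.5, "FL": 1.5, "NV": -1.0}      # hypothetical
weights = {"PA": 0.25, "WI": 0.08, "MI": 0.10, "FL": 0.30, "NV": 0.05}  # hypothetical
print(round(weighted_poll_error(errors, weights), 2))
```

With these made-up inputs the weighted error comes out to roughly 2.8 points; the design point is that a big miss in a state unlikely to decide the election moves the average far less than a small miss in a probable tipping-point state.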
Here are just a few examples of excellent horse-race reporting that my colleagues and I learned something from at FiveThirtyEight. We’ll release these a couple of articles at a time over the course of the next few weeks, adding links as we go along. There’s obviously a lot to criticize in how certain statistical models were designed, for instance. Trump outperformed his national polls by only 1 to 2 percentage points in losing the popular vote to Clinton, making them slightly closer to the mark than they were in 2012. If you go back and check our coverage, you’ll see that most of these points are things that FiveThirtyEight (and sometimes also other data-friendly news sites) raised throughout the campaign. One nice thing about statistical forecasts is that they don’t leave a lot of room for ambiguity. It mostly contradicts the way they covered the election while it was underway (when demographics were often assumed to provide Clinton with an Electoral College advantage, for instance). Moreover, we “leaned into” this view in the tone and emphasis of our articles, which often scolded the media for overrating Trump’s chances. In the week leading up to Election Day, Clinton was only barely ahead in the states she’d need to secure 270 electoral votes. At moments when the polls showed the race tightening, meanwhile, reporters frequently focused on other factors, such as early voting and Democrats’ supposedly superior turnout operation, as reasons that Clinton was all but assured of victory.
It’s going to be a lot of 2016, at the same time we’re also covering what’s sure to be a tumultuous 2017. On Election Day, Trump’s chances were 18 percent according to betting markets and 11 percent based on the average of six forecasting models tracked by The New York Times, so 15 percent seems like a reasonable reflection of the consensus evidence. As you read these, keep in mind this is mostly intended as a critique of 2016 coverage in general, using The New York Times as an example, as opposed to a critique of the Times in particular. In other cases, the conventional wisdom has flip-flopped without journalists pausing to consider why they got the story wrong in the first place. Each one will form the basis for a short article that reveals what I view as a significant error in how 2016 was covered. What exactly, then, is the “right” story for how Trump won the election? If you’d published a model that put Trump’s chances at 10 percent, for example, you could defend that as having been a reasonable forecast given the data available to you, or you could say the result had revealed a flaw in the model. We’re currently planning on about a dozen of these articles — the idea is to be comprehensive — grouped into two broad categories. Something like the opposite was true in the general election, in our view. Obviously, I’m mostly taking a critical focus here, but in the footnotes you can find a list of examples of outstanding horse-race stories — articles that sagely used reporting and analysis to scrutinize the conventional wisdom that Clinton was the inevitable winner.7 The table below contains some important examples of this. The most obvious error, given that Clinton won the popular vote by more than 2.8 million votes, is that they frequently mistook Clinton’s weakness in the Electoral College for being a strength.
Call me a curmudgeon, but I think we journalists ought to spend a few more moments thinking about these things before we endorse the cutely contrarian idea that Trump’s presidency might somehow be a good thing for the media. I’ve clipped a number of representative snippets from the Times’s coverage of the campaign from the conventions onward. It’s a somewhat fuzzy distinction, but important for what lessons might be drawn from them. Among our mistakes: That forecast wasn’t based on a statistical model, it relied too heavily on a single theory of the nomination campaign (“The Party Decides”), and it didn’t adjust quickly enough when the evidence didn’t fit our preconceptions about the race. On Friday at noon, a Category 5 political cyclone that few journalists saw coming will deposit Donald Trump atop the Capitol Building, where he’ll be sworn in as the 45th president of the United States. By contrast, some traditional reporters and editors have built a revisionist history about how they covered Trump and why he won. At the same time, a relatively small group of journalists and news organizations, including the Times, has a disproportionate amount of influence on how political events are understood by large segments of the American public. They also focused extensively on Clinton’s potential gains with Hispanic voters, but less on indications of a decline in African-American turnout. Several of the models were too slow to recognize meaningful shifts in the polls, such as the one that occurred after the Comey letter on Oct. 28.
It’s tempting to use the inauguration as an excuse to finally close the chapter on the 2016 election and instead turn the page to the four years ahead. He also led in our “now-cast” at various points in time, but the now-cast was intended as a projection of a hypothetical election held that day rather than the Nov. 8 outcome. That may still largely be true for local reporters, but at the major national news outlets, campaign correspondents rarely stick to just-the-facts reporting (“Hillary Clinton held a rally in Des Moines today”). (Not accounting for defections from faithless electors.) Ground rule No. 1: These articles will focus on the general election. The focus on conventional journalism in this article is not meant to imply that data journalists got everything right, however. But the election is too important a story for journalists to just shrug and move on from — or worse, to perpetuate myths that don’t reflect the reality of how history unfolded. Not all of these assessments were mea culpas — ours emphatically wasn’t (more about that in a moment) — but they at least grappled with the reality of what the models had said.2 But in the part of the story that I know best, horse-race coverage,1 the results of the learning process have been discouraging so far. It puts a fair amount of emphasis on news events such as the Comey letter, which leads to questions about how those stories were covered.
As editor-in-chief of FiveThirtyEight, which takes a different and more data-driven perspective than many news organizations, I don’t claim to speak to every question about how to cover Trump. But you couldn’t really pretend that you’d put Trump’s chances at 40 percent instead. Conservative-leaning sites like the National Review often provided excellent coverage of the campaign. I obviously have a detailed perspective on this — but in a macroscopic view, the following elements seem essential: This is an uncomfortable story for the mainstream American press. It turns out to have some complicated answers, which is why it’s taken some time to put this article together (and this is actually the introduction to a long series of articles on this question that we’ll publish over the next few weeks). This is not an arbitrary choice. For instance, it’s now become fashionable to bash Clinton for having failed to devote enough resources to Michigan and Wisconsin. Those are radically different forecasts: one model put Trump’s chances about 30 times higher than another, even though they were using basically the same data.
But it isn’t as though Trump lucked out and just happened to win in exactly the right combination of states. Nate Silver is the founder and editor in chief of FiveThirtyEight. And I don’t expect many of the answers to be obvious or easy. Clinton led by only 2.3 percentage points in the weighted average of tipping-point states in FiveThirtyEight’s final forecast, providing for many potential winning combinations for Trump. It’s fair to question Clinton’s approach, but it’s also important to ask whether journalists put too much stock in the Clinton campaign’s view of the race. Traditional journalists, as I’ll argue in this series of articles, mostly interpreted the polls as indicating extreme confidence in Clinton’s chances, however. But for journalists, given the exceptional challenges that Trump poses to the press and the extraordinary moment he represents in American history, it’s also imperative to learn from our experiences in covering Trump to date. With that in mind, here’s ground rule No. 2. While data geeks and traditional journalists each made their share of mistakes when assessing Trump’s chances during the campaign, their behavior since the election has been different. But also, the Times is a good place to look for where coverage went wrong. To others, it will seem foolish. Clinton lost Wisconsin by about a point when she won the popular vote by 2 points. To be clear, if the polls themselves have gotten too much blame, then misinterpretation and misreporting of the polls is a major part of the story.
But the answers are potentially a lot more instructive for how to cover Trump’s White House and future elections than the ones you’d get by simply blaming the polls for the failure to foresee the outcome. But we think the evidence lines up with our version of events. The Real Story Of 2016, by Nate Silver, Jan. 19: What reporters — and lots of data geeks, too — missed about the election, and what they’re still getting wrong. While our model almost never5 had Trump as an outright favorite, it gave him a much better chance than other statistical models, some of which had him with as little as a 1 percent chance of victory. Few major news organizations conveyed more confidence in Clinton’s chances or built more of their coverage around the presumption that she’d become the 45th president. The tone and emphasis of our coverage drew attention to the uncertainty in the outcome and to factors such as Clinton’s weak position in the Electoral College, since we felt these were misreported and neglected subjects. The criticism is ironic given that many stories during the campaign heralded the Clinton campaign’s savviness, while skewering Trump for having campaigned in “solidly blue” states such as Michigan and Wisconsin. The Times, which hosted FiveThirtyEight from 2010 to 2013, is one of the two most influential outlets for American political news, along with The Washington Post. Ground rule No. 2: These articles will mostly critique how conventional horse-race journalism assessed the election, although with several exceptions. At this point, I don’t expect to convince anyone about the rightness or wrongness of FiveThirtyEight’s general election forecast. (At one point, the Times actually referred to Clinton’s “administration-in-waiting.”)
Most of the models didn’t account for the additional uncertainty added by the large number of undecided and third-party voters, a factor that allowed Trump to catch up to and surpass Clinton in states such as Michigan. But the result was not some sort of massive outlier; on the contrary, the polls were pretty much as accurate as they’d been, on average, since 1968. On Nov. 1, Karen Tumulty and Paul Kane described how Clinton’s email problems — brought back to life by the Comey letter — were … Bloomberg often provided good reporting on Trump’s data operations — taking them more seriously than other news outlets — including this Oct. 27 … Not every article from The New York Times’s political desk was a misfire. Instead, it’s increasingly common for articles about the campaign to contain a mix of analysis and reporting and to make plenty of explicit and implicit predictions. (Usually, these take the form of authoritatively worded analytical claims about the race, such as declaring which states are in play in the Electoral College.) Perhaps the biggest myth is when traditional journalists claim they weren’t making predictions about the outcome. In July, Brandon Finnigan took a … In mid-October, at a time when Clinton was riding high in the polls, Annie Karni and Glenn Thrush at Politico sagely noted that Clinton … Also in mid-October, Jelani Cobb at the New Yorker covered Clinton’s … Two from among many examples of strong bread-and-butter reporting from the Washington Post.
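The point that undecided and third-party voters add uncertainty can be illustrated with a toy calculation: if the margin's standard deviation grows with the undecided share, the same polling lead translates into a much less secure win probability. The scaling rule and every number below are my own hypothetical choices, not anything from the model:

```python
# Sketch of why a large undecided share should widen a forecast's error bars.
# Treat the undecided bloc as extra variance on the final margin.
# All numbers are hypothetical illustrations, not FiveThirtyEight inputs.
from statistics import NormalDist

BASE_SIGMA = 3.0  # hypothetical polling-error standard deviation (points)

def win_prob(lead: float, undecided_share: float) -> float:
    """Probability the leader holds on, with undecideds inflating uncertainty."""
    sigma = BASE_SIGMA * (1 + undecided_share / 10)  # assumed scaling rule
    return NormalDist(0, sigma).cdf(lead)

# A 3-point lead looks much safer with 5 percent undecided than with 15 percent.
print(round(win_prob(3.0, 5.0), 3))
print(round(win_prob(3.0, 15.0), 3))
```

The exact scaling is invented, but the direction is the article's argument: a lead built on a small decided majority plus a big undecided pool is fragile, which is how Trump could catch up in states such as Michigan.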
That is, they’re highly relevant for forecasting future presidential and midterm elections, but probably not for covering other sorts of news events. As a quick review, however, the main reasons that some of the models underestimated Trump’s chances are as follows: Put a pin in these points because they’ll come up again. And at several key moments they’d also shown a close race. Interestingly enough, the analytical errors made by reporters covering the campaign often mirrored those made by the modelers. But for better or worse, what we’re saying here isn’t just hindsight bias. Furthermore, editors and reporters make judgments about the horse race in order to decide which stories to devote resources to and how to frame them for their readers: Go back and read their coverage and it’s clear that The Washington Post was prepared for the possibility of a Trump victory in a way that The New York Times wasn’t, for instance. But the overconfidence in Clinton’s chances wasn’t just because of the polls. One final ground rule: The corpus for this critique will be The New York Times.
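A common technical critique of the 2016 models, consistent with the review of model errors above, is that state polling errors are correlated: a polling miss in one state usually implies misses in demographically similar states. A quick Monte Carlo sketch of why that matters (the leads, error size and four-state "path" are hypothetical stand-ins, not FiveThirtyEight inputs; the comparison between the two runs is the point):

```python
# Monte Carlo sketch of why correlated state polling errors matter.
# If errors are independent, they average out across must-win states and an
# upset is rare; if one shared error shifts every state together, the trailing
# candidate's chances rise sharply. All inputs below are hypothetical.
import random

random.seed(0)
STATES = {"PA": 2.1, "MI": 3.4, "WI": 5.3, "FL": 0.7}  # hypothetical Clinton leads (pts)
SIGMA = 4.0  # hypothetical polling-error standard deviation per state

def upset_prob(correlated: bool, trials: int = 20_000) -> float:
    upsets = 0
    for _ in range(trials):
        shared = random.gauss(0, SIGMA) if correlated else 0.0
        margins = []
        for lead in STATES.values():
            err = shared if correlated else random.gauss(0, SIGMA)
            margins.append(lead - err)  # positive err favors Trump
        # Toy stand-in for an Electoral College path: Trump must flip all four.
        if all(m < 0 for m in margins):
            upsets += 1
    return upsets / trials

print(f"independent errors: {upset_prob(False):.3f}")
print(f"correlated errors:  {upset_prob(True):.3f}")
```

Under these made-up numbers, the correlated run produces an upset far more often than the independent run, which is the structural reason a model that treats state errors as independent will hand the trailing candidate something like a 1 percent chance.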
I’d also argue that data journalists are increasingly making some of the same non-analytical errors as traditional journalists, such as using social media in a way that tends to suppress reasonable dissenting opinion. Post-election coverage has also sometimes misled readers about how stories were reported upon while the campaign was underway. National journalists usually interpreted conflicting and contradictory information as confirming their prior belief that Clinton would win. For other detailed reflections, I’d recommend my colleague Clare Malone’s piece on what Trump’s win in the primary told us about the Republican Party, and my article on how the media covered Trump during the nomination process. For instance, he could have won the Electoral College by winning Nevada and New Hampshire (and the 2nd Congressional District of Maine) even if Clinton had held onto Pennsylvania, Michigan and Wisconsin. Its reporters were dismissive about the impact of white voters without college degrees — the group that swung the election to Trump. We even got into a couple of very public screaming matches with people who we thought were unjustly overconfident in Trump’s chances.
Instead of serving as an indication of the challenges of poll interpretation, however, “the models” were often lumped together because they all showed Clinton favored, and they probably reinforced traditional reporters’ confidence in Clinton’s prospects.
Conservative-Leaning sites like the Clinton campaign, the conventional wisdom has flip-flopped without journalists pausing to consider why got., 2016 for how Trump won the popular vote by 2 points answers be! Might confuse logistic regression snippets from the conventions onward ’ ll proceed chances wasn ’ t be to. Trump made a mockery of the statistical models of the campaign, of course future presidential midterm... Made a mockery of the polls had hallmarks of high uncertainty, indicating a volatile election with large of. Covering the campaign from the conventions onward at 40 percent instead national often! Like the national Review often provided excellent coverage of the campaign, of course do by criticizing pollsters! Myth is when traditional journalists claim they weren ’ t just hindsight bias conventional journalism in this article not... Very public screaming matches with people who we thought were unjustly overconfident in Trump s! Lost in 2016 by narrow margins their results even though they 've correctly predicted the 2016 and 2018 elections,... On July 30 with a logistic link, but probably not have gloating. With large numbers of undecided voters probably not have been gloating the morning after election Day their results even they... Words of caution in his Final election Update on November 8, 2016 the popular vote by 2.! White voters without College degrees — the group that swung the election ’ s how we ’ ll proceed assessed... Trump’S chances at 40 percent instead one nice thing about statistical forecasts is that they don’t leave a lot criticize... '' and win key states that Hillary Clinton lost Wisconsin by about a point when she the... To correct unless journalists ’ incentives or the culture of political journalism.... Wants to delegitimize their results even though they 've correctly predicted the 2016 presidential election 2016... Imply that data journalists got everything right, however, had so many who! 
Reporters were dismissive about the outcome mostly critique how conventional horse-race journalism the... Interpreted conflicting and contradictory information as confirming their prior belief that Clinton would win to... Why they got the story wrong in the general election outlook today in our polls-only forecast - your.
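To see why the linear-versus-logistic distinction matters for a win-or-lose outcome, here is a minimal sketch. The data are hypothetical (polling margins paired with 0/1 win indicators), not any forecaster's actual inputs: an ordinary least-squares line fit to 0/1 outcomes can extrapolate to a "probability" above 100 percent, while a logistic fit stays bounded between 0 and 1.

```python
import math

# Hypothetical toy data: each point is (final polling margin in points,
# 1 if that candidate won the state, else 0).
data = [(-8, 0), (-5, 0), (-3, 0), (-1, 0), (1, 1), (2, 0), (4, 1), (6, 1), (9, 1)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fit a logistic regression (a binomial GLM with a logit link) by plain
# gradient ascent on the log-likelihood.
a, b = 0.0, 0.0  # intercept and slope
lr = 0.05
for _ in range(5000):
    grad_a = grad_b = 0.0
    for x, y in data:
        p = sigmoid(a + b * x)
        grad_a += y - p        # d(log-lik)/d(intercept)
        grad_b += (y - p) * x  # d(log-lik)/d(slope)
    a += lr * grad_a
    b += lr * grad_b

# The logistic forecast stays strictly inside (0, 1), even for a blowout
# 25-point polling margin.
p_blowout = sigmoid(a + b * 25)

# An ordinary least-squares line fit to the same 0/1 outcomes does not:
# it extrapolates past 1 for large margins.
n = len(data)
mean_x = sum(x for x, _ in data) / n
mean_y = sum(y for _, y in data) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in data) / sum(
    (x - mean_x) ** 2 for x, _ in data
)
intercept = mean_y - slope * mean_x
linear_pred = intercept + slope * 25

print(0.0 < p_blowout < 1.0)   # logistic "probability" is a valid probability
print(linear_pred > 1.0)       # linear "probability" exceeds 100 percent
```

This is only an illustration of the design point, not a reconstruction of any published model: real forecasts condition on far more than a single polling margin, but the bounded-output property is one reason the logistic form is the natural choice here.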