The Hugh Hewitt Show


National Journal polling reporter Steven Shepard has no problem with 99% Democrat sampling being accurate

Wednesday, September 26, 2012


HH: Joined now by Steven Shepard. Steven is the director of polling for the National Journal. Hello, Steven, welcome to the program, thanks for being on.

SS: Hi, how are you?

HH: Great. Your article today on polling, would you summarize it for the audience?

SS: Sure. I guess there has been a lot of talk in largely conservative circles lately, as you’re aware, and I know that you’ve addressed this on your show. And you talked to some of those same people I did for my story, basically that polls are sampling, or including too many Democrats in their samples. And I spoke to a few public and media pollsters about this, and they responded basically that they’re not, there is no model to which they’re setting the party ID breakdown in their polls. They’re just reporting the results of who they’re speaking with.

HH: Now they do weight their results based upon any number of different factors, correct?

SS: Correct. They weight based on gender, to make sure that they have sort of an accurate reflection of the overall population or the overall electorate with regards to the amount of men and women. They weight according to race, they, some pollsters will weight according to other factors like age, and mostly to age, but some towards also education or income, or something that is generally measured, and is relatively static, or changes over time. It’s something that can be measured.

HH: So why do they weight those factors, but not party affiliation?

SS: Because party affiliation is an attitude. This is their argument. It’s an attitude. This is exactly what the poll is designed to measure – how Americans are feeling about politics. And frequently, for instance, if you conduct a poll about, let’s say, the presidential race, and you ask the party identification at the end of the poll, it will frequently, especially for those sort of soft independents who could move one way or the other, they’ll just end up identifying with the party of the candidate that they chose earlier in the poll. So that tends to fluctuate a lot.

HH: So if, in fact, people identify themselves at the end as being in a sample of a thousand, a thousand Democrats, would that bother you?

SS: Would it bother me?

HH: Yeah.

SS: I’m a reporter. Nothing bothers me.

HH: Would you be suspicious of the results if the poll was of a thousand Democrats?

SS: I would probably look towards other things other than party ID and wonder how you came up with a thousand people, and all thousand were Democrats. That seems…

HH: So you’d be suspicious of it?

SS: I would wonder what might cause that, but I wouldn’t, I mean, it seems like a crazy hypothetical, to be honest with you.

HH: No, it’s actually not. It’s a very legitimate one, because we have interactive polls all the time, and I discount internet polls because of self-selection. Self-selection is a phenomenon that we tend to discount as having any predictive or actually accurate reflection of the world around us, correct?

SS: That’s fair, yeah.

HH: And so you dismiss internet polls, don’t you?

SS: Oh, there are some, I mean, I will say this. I’ve also written a lot, looking long term at the polling industry and the way in which telephone polls, with the rise of cell phones and generally Americans’ greater preferences towards privacy, you’re seeing response rates for telephone polls are falling, and they’re falling very quickly. They’re down to about 9% of the calls that Pew Research Center made earlier this year in a study that they did, only 9% resulted in completed interviews.

HH: But Steve, time out, it’s not a trick question…

SS: So internet polling offers a solution to that challenge, but it’s a solution that at this time is imperfect, and it’s largely opt in, where yeah, you’re right, people are being self-selected.

HH: And so it’s not a trick question. You just tend not to report those polls, correct, because they’re not valid as indicators of any sort of predictive outcome.

SS: Well, they don’t achieve, in most cases, they don’t achieve a random sample because people are opting in. People are selecting themselves.

HH: Exactly. So you don’t trust it.

SS: I would, I’m not comfortable with blanket statements, but generally speaking, they are trusted less than a phone survey, or a survey of both land line and cell phones.

HH: Man, that surprised me. I don’t trust them at all, and I don’t see National Journal ever reporting them, and I don’t think anybody reports them in the news business, because their predictive value is zero. So what I’m trying to get at is if you have a poll of a thousand people, at what point do your antennae start to quiver? And do you worry about its predictive character? Is it 600 Democrats, 400 Republicans? What number is it, because that’s what I asked Lee Miringoff about, and he could not answer that question. I don’t know if you read the interview I did with Lee, did you?

SS: I did read the interview you did with Lee, and I spoke with Lee about it yesterday. I spoke with him for my story, and I did include the fact that he came on your show and talked about his Ohio survey a couple of weeks ago.

HH: Yeah, you said I lambasted him.

SS: I think that you’re, the interview was of a fairly confrontational nature.

HH: It wasn’t in the least.

SS: Is that an unfair characterization?

HH: No, it wasn’t in the least. I just kept asking him a question as to what point would he worry about his data set, at what point…that’s not lambasting. It’s the same question I’m giving to you. And it’s not a hard question. I’m not trying to trick anyone. I’m genuinely trying to figure out at what point do you, you report on surveys, therefore the readers of the National Journal are going to trust you to toss out those which are partisan, and we often toss out, or we look skeptically at PPP polls, for example, versus Republican outfits, because we’re afraid they’re manipulating them. And at what point do you get concerned that the sample has got no predictive ability?

SS: I think you look at, I’m more inclined to look at more stable demographics like gender and race and age to make sure that the polls are, if a poll were of, say it were 65% female, or 65% male, I would look very skeptically on that. If it were 80-85% white, for instance, nationally speaking, I would tend to think that that would favor, in the presidential race, Governor Romney. If it were overwhelmingly young people, for instance, then I would think it would tend to favor President Obama, and move the dial there a little bit more. But that’s like asking, though, if…

HH: I’m just talking about voter…

SS: If a poll showed Obama or Romney leading with 60% of the vote, I mean, party ID is as much an attitude as vote preference is. And by arbitrary, pollsters argue, by arbitrarily picking a target, based on whether it’s based on the past, or whether it’s based on your current perceptions of the race, or current perceptions of the American electorate, then that’s actually the true bias.

HH: And Steve, that, I know what they argue. I find it unpersuasive as I believe Democrats would find it unpersuasive if the Republicans in the sample reflected an overwhelming bias away from a turnout model, correct?

SS: Which as I wrote in my story was the case in 2004. Democrats complained that the polls contained…

HH: And so the question is what is an accurate, what is a fair weighting of D’s versus R’s versus I’s? That’s my question, and you didn’t answer it, but I’m asking you…

SS: Because it’s not, it’s, in a way, it’s a trick question. The answer is that the proper ratio is whatever people tell you, because it’s an attitude, and that’s what the poll is designed to measure.

HH: But wait, if you…now you see, are you a statistician at all?

SS: I’m not. I’m a reporter.

HH: And so did you take any regression analysis, or anything like that as an undergrad?

SS: I was a Poli-Sci major in college, and took some of those courses, yeah.

HH: Okay, because in fact, what you just said, if you weigh some variables, you are going to necessarily weigh in additional D’s over R’s. For example, if you scale up your African-American population, you’re adding D’s. If you scale up your male, white population, you’re adding R’s. So what you said is not true, but hang on, we’ll come right back after the break.

- – - –

HH: Of course, today Gallup has President Obama three points up among all registered voters. Rasmussen has President Obama up among likely voters, tied if you push the leaners into either Republican or Democratic camps. But today, the Washington Post has a poll out that says that Barack Obama is eight points ahead of Mitt Romney in Ohio, but the poll sampled 7% more likely voters among Democrats than it did among Republicans. And in 2010, in fact, Republicans voted in greater numbers than Democrats did by 1%. However, in 2008, Democrats voted by 8% greater than Republicans. I don’t have the 2004 numbers in front of us. So the numbers shift around, Steven. The big question is, and you’re aware of who Michael Barone is, right?

SS: I am.

HH: And do you respect his work?

SS: I do. He’s a co-author of the Almanac of American Politics, which is, I have four of them sitting on my desk, going back the last four cycles. So yes, I respect his work very much.

HH: Regular guest on this show, and he said on the show last week that it has never been the fact in history that a presidential candidate has achieved greater than 3% increase in party ID in the year following the off year, and so that if the Democrats took 30% of the vote in 2010, their maximum ceiling would be 33% of the vote. I don’t know what those numbers are, but I trust Michael is correct about the 3%. If in fact he is correct, would you tend to doubt a poll that had a margin of Democrats in it much higher, much higher than the 3% higher than the latest recent midterm election?

SS: I, first of all, I would say that just because something’s never happened before, it doesn’t mean it can’t happen. I think we can agree on that.

HH: You’re right. Only Republicans might vote. It could happen.

SS: The second thing I would say is that if for a poll to achieve that, I think that there would be something else at work here, where the poll wasn’t properly weighted to match the actual demographic characteristics of the electorate. So you’d probably see something off when it comes to race, or something off when it comes to age, whether it has too many seniors, or an electorate that skews too young, or an electorate that is maybe 55% male when it probably should be 48-49% male.

HH: But don’t you have to massage, again, I’m coming back to basic polling methodology, and I’ve been doing this since 1976 as an undergrad with Gary Oren and Bill Schneider, and I know this stuff, though I must just be, what did you call me? A radio jock? Is that what you called me in your story?

SS: I believe it was radio talk show host on first mention.

HH: That the radio jock, I’m quoting here. That’s derogatory, isn’t it?

SS: I didn’t think…it was not intended to be derogative.

HH: Just questioning a little bit there, because you are, you’re a graduate of the George Washington University, right?

SS: I am indeed.

HH: 2004?

SS: That’s correct.

HH: Makes you 28?

SS: 29.

HH: 29. Okay, just checking, because I’m dealing with an expert here. So I want to know, from your expert available advice, if you’re measuring and you’re weighting some demographics, what is the argument from the science as to why you wouldn’t weight the self-described demographic of party affiliation, but you would weight all the other ones, which are also, by the way, self-described?

SS: Well, that’s true. They are self-described.

HH: Yes, it is.

SS: And first of all, let me say this one thing. I am a reporter. I’m not a pollster. I’m not defending my own work.

HH: Well now, wait a minute.

SS: I’m not the one conducting these polls.

HH: A reporter doesn’t use a term like radio jock or lambaste unless they’re attempting to skew that paragraph against the interlocutor with the pollster so that a reader would tend to prefer the pollster’s opinion over the radio jock who also teaches Con Law and has been doing this for 30 years, right?

SS: The scientific argument…

HH: Now you’re skipping my question, Steven.

SS: …is that while people…

HH: Steve, time out, an objective reporter…

SS: I’m answering your initial question.

HH: No, but an objective reporter, which you tried to put out there, wouldn’t slant a paragraph that badly, would they?

SS: I thought your interview with Lee Miringoff was confrontational in nature.

HH: And I lambasted him, and you called me a radio jock. And so is that in fact objective, fair and unbiased reporting?

SS: I believe so. Jock is a colloquial term.

HH: You betcha. Go ahead.

SS: It refers to people who work in radio.

HH: I’ll let the audience judge that. Go ahead. Now tell us more about the actual objective scientific measure at this point.

SS: The scientific measure is that party affiliation is variable. It’s dynamic. People change. And as I said, depending on where you ask it in the survey, people could identify at the start of a survey as independent, for instance, and at the conclusion of the survey as a Democrat or Republican based on the questions they’re asked, which include a horse race ballot, but also might include policy questions.

HH: Okay, that’s fine. Now…

SS: The second, while yes, people will…

HH: Hold on, we’ve got to go to break.

SS: …reporting their race or their age or their gender. Outside of some extreme circumstances, those are not going to change.

HH: Okay, I’ll be right back with Steven Shepard. He just, I think his short form was that they’re not going to lie about their gender or their race, but they might lie about their party affiliation. No proof of that, but I’ll be right back with him.

- – - –

HH: Now Steven, this is the $64,000 dollar question. And I’m just a radio jock, so I don’t have any opinion on this whatsoever. But if they might deceive us, if they’re easily influenced, if we can’t trust the number when it comes to party identification, why ask at all?

SS: Because it reflects the way Americans are feeling about politics and about the two political parties, and you know, the rise of independents, that’s been a big theme, and it’s true that people are leaving both parties, to some extent, over the past four years or so, and that’s an important measure, because it tells us something about the electorate.

HH: And so that is an independent variable that is reported of interest, because normally, if you read the story, for example, the Washington Post story today, they won’t tell you the number of Democrats versus the number of Republicans. Ditto the Wall Street Journal, ditto every Quinnipiac, ditto every Marist. They never tell that to you.

SS: Well, first of all…

HH: So if it’s of interest…

SS: …if you do read, the Washington Post published the full top line results, which included the party identification at the bottom.

HH: Yeah, they published it, but they don’t put it in the story. So…

SS: Well, and also in Chris Cillizza’s blog post, with takeaways from the poll, he did publish the party ID breakdowns after all.

HH: But in the Washington Post lead story today, they don’t. And so if it’s an independently interesting, if that’s what they’re polling for, is to find trends in Democrat and Republican, and by the way, that’s horse malarkey, it’s not why they do it, they do it to true up the poll, and Miringoff admitted that to me, Steve. He said that they’ve got 11% is a valid turnout model on which predictions can be made. And so my question is at what point will you as a reporter throw the flag on a pollster who is manipulating, because you would admit, wouldn’t you, that pollsters might manipulate a poll?

SS: I would admit that. What I would say is that the pollsters who are doing sort of this quality work, and this does include, also, I have to say, some pollsters on each partisan side, the pollsters that are doing quality work, they’re not, there is no turnout model. I think that that’s sort of a misconception, that pollsters are pinning a target and saying well, we’re going to have Democrats at this advantage point, whether it’s, you said the Washington Post poll at seven. No, they’re just talking to what they think is a demographically representative group of people, finding out which of those people says that they’re most likely to vote, for whom they’re going to vote, and then what their party identification is. There’s no adjustment made on party identification, just like there’s no adjustment made on vote preference to sort of reach for an arbitrary target.

HH: But Steve, are you willing to admit, I think you agreed with me, a pollster might shine a poll one way or the other, correct? They might bias the poll. And you just said quality pollsters, and I think you’re referring to Quinnipiac and Marist. John Zogby was for a long time a quality pollster. Do you think he’s a quality pollster, considered as much by the media today?

SS: I think we’ve covered some of the shortcomings of internet polls earlier in the conversation. I think that would stand as an answer to your question.

HH: It actually doesn’t. Do you count Zogby as a reputable organization?

SS: I would have to fully vet his portfolio, but I will say that some of his work that’s been done over the web has been inconsistent.

HH: And how about Strategic Vision and PPP? Do you have concerns over them?

SS: Well, Strategic Vision has been accused of fabricating, was accused of fabricating their results. And they never really answered that accusation positively. So that’s what I would say about them. As far as PPP goes, they use a methodology, an automated methodology that doesn’t include cell phone respondents. And right now, roughly a third of adult Americans don’t have land line phones, and so they’re automatically excluded from PPP’s polling. So that would be…

HH: So we’ve just agreed, we’ve got three pollsters – Zogby, Strategic Vision and PPP over which you have concerns over their methodology, their reputation or their past results. So why do you accept, and I’ve got nothing against Marist or Quinnipiac, nothing, I have no reason to believe they’re not trying to do their job. But why in the world would you suspend disbelief as to the importance of partisan identification simply because they told you? Do you have an independent source that party identification doesn’t matter, because I’ve got Michael Barone telling me it does.

SS: I would say that they rigorously attempt to contact a random sample of voters in the states in which they’re working.

HH: That’s what they told you. But did you verify that with anybody, that that matters? Did you go to a polling expert not affiliated with someone being paid to do the poll who’s getting an accounting receipt every month from the Wall Street Journal or CBS…

SS: I think it’s a fairly established maxim of statistics that if you’re going to conduct an opinion poll, starting with a random sample or a sample in which each member of that universe is evenly likely to be contacted, is best practice.

HH: No, I don’t think that is. And by the way, are you familiar with the Minneapolis poll, the Minnesota poll, and its problems over the years, and how deeply biased it is? Are you familiar with that?

SS: I think I’m going to need for you to explain.

HH: Well, the Minnesota poll, as illustrated by Powerline over many years, has been notoriously wrong, and has always tweaked its numbers to affect its outcome in a very bad and partisan fashion. Therefore, we’ve got four organizations now. I just want to know if you called up anybody for your story not in part of the organizations being criticized for their partisan divide to determine what a good rule of thumb is.

SS: Yes, I contacted a couple of campaign pollsters, you know, consultants who work for political campaigns, one of whom is quoted in the story, is a Republican pollster with Whit Ayres’ firm in Alexandria, Virginia.

HH: Did you call any academics, anyone, because I have no idea whether he’s good, bad or indifferent, but anyone, for example, from Harvard or George Washington, or anyone who’s a statistician to say that party identification does not matter, that you can go with random selection, because I find that astonishing. It might be true, by the way. I’m not trying to lambaste you, and I’m just a radio jock. I’m just trying to figure out I think it’s counterintuitive.

SS: Okay…

HH: I think it’s crazy.

SS: Okay, I mean look, I obviously didn’t contact, I didn’t contact any academics. I couldn’t possibly contact everyone who has an opinion on this issue. What I will say is that I offered both Doug Schwartz of Quinnipiac, and Lee Miringoff at Marist an opportunity to respond to some of these criticisms, the specific criticism that their polls are sampling too many Democrats in some cases, not in all cases, but in some cases.

HH: And did you call any of those critics, did you call any of the critics? You didn’t call me, and so that’s a given. You quoted me, but you didn’t call me. Did you call any of the critics?

SS: I did, well, I quoted you, and included, and asked about that conversation.

HH: But did you call any of the critics?

SS: I cited many of the critics, but I didn’t call them to ask them, I didn’t have any follow up questions. I was starting with the criticism as a baseline for my reporting.

- – - –

HH: Thanks to my guest as well, Steven Shepard of the National Journal for coming on and defending his story here. I appreciate that very much, but I conclude where I began, Steven. Is there some level at which you would worry that the integrity of a poll had been compromised, not necessarily intentionally, but perhaps even randomly by the answering patterns of those indicating Democrat over Republican? Is there some percentage at which you would say I can’t trust that, they sampled too many Democrats, self-identifying Democrats?

SS: Well, every poll has a margin of error. The margin of error not only refers to the top line result, say, in the Obama/Romney horse race, but it also refers to, you know, it refers to every question, including the question about party affiliation. And the margin of error is applicable 19 out of 20 times. That means 5% of the time, the results are going to fall, the results, when compared to the overall universe, are going to fall outside that margin of error. And you know, that happens. But I don’t think that makes their work any less rigorous. I mean, they can defend themselves. All I did was…

HH: I didn’t say that. It wasn’t, I wasn’t asking you to defend anyone. It was just a question to you, Steve Shepard, as a journalist. Is there a point at which you look at a sample, and you say that’s too many Republicans, that’s too many Democrats self-identified, I can’t trust that. As a journalist, is there anything out there that’s a red flag in front of Steve Shepard that you would not trust the poll?

SS: I’d be more inclined to look at more stable demographics before reaching that judgment. So I’d look at race and gender, and stuff that…

HH: It’s not…so the answer is…so if it was 80-20 Democrats, it wouldn’t bother you, if it had the right number of African-Americans, Latinos, women, men, seniors? It would not bother you if it was 80% self-identified D?

SS: It would at least indicate, albeit in an exaggerated way, that there has been an important shift that we should be paying attention to. I mean…

HH: But that’s not the question. But would it bother you? Would it alarm you? Would it make you less trusting of the poll?

SS: It would be something I would report on, because I think that would be an important shift.

HH: That’s not, that wasn’t the question. The question is just would it, you, Steve Shepard, say oh, that’s not good, I’m not going to go with this one, or I’m not going to go with it strongly? Is there anything that would cause you as a reporter to say no?

SS: I think I would find it interesting, and I would write about that.

HH: But that wasn’t my question.

SS: It’s not that I would throw it out. That’s not…

HH: You know, how long have you, you’ve been a reporter for eight years. Have you ever run into a source that wouldn’t answer the simple question? I mean, would it cause you concern?

SS: I think I am answering your question. No, it wouldn’t cause me concern if there were 80% Democrat or 80% Republican.

HH: It would not cause you concern.

SS: I would write that and say there’s something going on here.

HH: I got my answer. It would not cause you concern.

SS: And that would run both ways.

HH: But I understand. But it would not cause you concern. If it was 100% Democrats, and they said Obama was ahead, it wouldn’t cause you concern?

SS: I would write that if in a random sample of voters in a given state, or across the country, if 99% were identifying themselves as Democrats, but the poll was adequately weighted according to race, according to gender, according to age, I would look at education, I would look at income. And if everything else checked out, I would say well, maybe there’s an important shift going on. Obviously, this is, you know, a big exaggeration.

HH: Steven Shepard, thank you, I appreciate the time. Come back again on a future Hugh Hewitt Show.

End of interview.
