Issue Polling… some evidence

A question from @Rufus_GB about the validity of issue polling is linked below.

Here is more of an answer than may have been wished for, largely via links to more thorough analysis than Tweets provide.

I question the value of issue polling, in general, because the results can vary so wildly based on the wording of the question. On Roe, it’s that issue plus the fact that I don’t believe many (most?) Americans understand what Roe actually does/doesn’t do. Thoughts?— Rufus (@Rufus_GB) December 7, 2021

On abortion, wording matters, but don’t confound wording with substantively different aspects of the issue. If we ask about abortion “for any reason” or in the case of a “serious defect,” that is not just wording but different circumstances. Decades of work show the circumstances matter a lot.

See this review of abortion polling since 1972 by @pbump in @washingtonpost

The GSS has asked the same questions since then, covering 7 different circumstances. There are clear, consistent differences across them. That is much more revealing than just “wording.” From the @pbump article:

In my @MULawPoll Supreme Court surveys in September and November, 71% of those with an opinion oppose striking down Roe and 29% favor striking it down. But 54% would uphold the 15-week ban in Dobbs and 46% would oppose it, again among those with an opinion.

Those are substantive differences and make sense.

Here is my analysis of that, including a look at who doesn’t have opinions on the abortion questions. Not all do, and that is also important for understanding issue polls.

The fact that people respond to issue polls differently when the questions raise different aspects of an issue seems an obvious strength of issue polling: circumstances matter, and respondents are sensitive to those circumstances.

If people responded the same way regardless of the circumstances presented in the question, we’d suspect they weren’t paying attention!

There have been a number of recent articles claiming that issue polls are “folly” or that they have seriously missed on state referenda. (And they have missed on some referenda, but the big misses are highlighted while better performance is ignored.)

The Sweep: The Folly of Issue Polling

There are important criticisms: public awareness of issues & information about the issues may be limited. Folks will give an answer but it may not mean much to them. Politicians don’t just “do what the majority wants” so policy doesn’t follow the polls very closely or quickly.

Some might say policy doesn’t mirror opinion polls, and blame the polls. I’d think the elected officials might share the blame. They do respond to public opinion sometimes, but they are also responsive to interest groups’ and donors’ issue preferences. If they don’t adopt policy in line with public majorities, I’d look at those other influences for part of the story.

Issue polls often don’t have an objective “right answer.” That is what elections do for horse race polls: we know the final answer. But there isn’t a “true” measure of presidential approval or support for an issue. So how do we know?

Referenda provide a chance to measure issue poll accuracy

The most comprehensive analysis of referenda voting and polls was presented at AAPOR in May 2021 by @jon_m_rob @cwarshaw and @johnmsides

60 years of referenda and polling, accuracy and errors, without cherry-picking.

See the full slide deck here.

The fit of outcomes to polls is pretty good, but there are also some systematic errors: more popular issues underperform on referenda, and more unpopular ones overperform, doing better than the polls predicted.

The fit varies across issues, but the relationship of polls to outcomes is positive for almost all issues.

Read the full set of slides. They highlight some of the criticism but provide the most comprehensive analysis of issue polls when we have an objective standard for accuracy. The results are pretty encouraging for issue polling’s relevance.

Issue polling may be criticized, but those with policy interests use it. In the absence of public issue polls, interest groups would still know what their private polls show but the public wouldn’t. That seems a good reason to have public issue polling.

There are plenty of examples. Here is one from the right, from HeritageAction.

Pew did a careful look at how much issue polls might be affected by the type of errors we see in election (horserace) polls. Probably not by very much.

Here is the Pew analysis.

Fivethirtyeight.com also looked at issue polls; their analysis is here.

An election poll off by 6 points is a big miss, and we saw a number of those in 2016 and 2020. But issues are not horseraces. If an issue poll shows a 6-point difference between pro- and anti- sides, 53-47, we’d characterize that as “closely divided opinion.” If it shows 66% in favor to 34% against, a 6-point error wouldn’t matter much: the balance of opinion would be clear regardless.

Also, issue preference is not the same as intensity, so good issue polling analysis needs to look at whether the issue has a demonstrable impact on other things like vote or turnout, or whether the issue dominates all others for some respondents. Plenty of issues have big majorities but low intensity or impact.
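To make that arithmetic concrete, here is a minimal sketch using the hypothetical splits from the paragraph above, with the 6-point miss applied against the pro side:

```python
# Sketch: how the same 6-point polling error changes the substantive
# reading of an issue poll. Splits are the hypothetical examples above.

def apply_error(pro_share: float, error: float) -> tuple[float, float]:
    """Shift `error` points from the pro side to the anti side."""
    return pro_share - error, (100 - pro_share) + error

for pro in (53, 66):
    worst_pro, worst_anti = apply_error(pro, 6)
    print(f"Reported {pro}-{100 - pro} -> with a 6-pt miss: "
          f"{worst_pro:.0f}-{worst_anti:.0f}")

# Output:
# Reported 53-47 -> with a 6-pt miss: 47-53   (the majority flips)
# Reported 66-34 -> with a 6-pt miss: 60-40   (balance still clear)
```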

There are good reasons to be careful about interpreting issue polls. But the outright rejection of them is not grounded in empirical research. I suspect it is a way to deny that “my side” is ever in the minority. It is especially in the interest of interest groups to dismiss them, even as they rely on them.

Who *doesn’t* have an opinion about Roe v Wade?

Given its prominence in political and legal debate for nearly 50 years, you might think everyone has an opinion about Roe v Wade. But there is variation in opinion holding that may surprise you.

Most telephone surveys ask about Roe without offering a “Don’t know” option, though if the respondent says “I don’t know” or “I haven’t thought about it” that is recorded. Typically this produces around 7-10% who volunteer that they don’t have an opinion. See examples here:

Academics have had a long running debate over whether surveys should explicitly offer “or haven’t you thought much about this?” as part of the question. Doing so substantially increases the percent who say they haven’t thought about an issue.

Despite more “don’t knows” when the option is offered explicitly, the balance of opinion among those with an opinion doesn’t seem to vary with or without the DK option. A debate remains over whether people have real opinions but opt out via DK, or whether, when pushed, they will give answers that reflect only weak opinions.

Online surveys present a new challenge. There is no way to “volunteer” a don’t know except to skip the item, which very few do. So should you offer DK explicitly and get more, or not offer it and get very few without an opinion?

In my @MULawPoll national Supreme Court Surveys we ask about a variety of Court cases. But obviously most people don’t follow the Court in detail, so I believe we must explicitly offer “or haven’t you heard enough about this?” Doing so produces some 25-30% without an opinion on most cases.

So is the “haven’t heard enough/don’t know” rate really around 10% or really around 30%? Clearly wording makes a big difference, but I think it is pretty clear that those who opt for “haven’t heard enough” are less engaged on an issue than those who give an opinion.

What is worth looking at here is not the absolute level of “haven’t heard” but how it varies across the population. The invitation to say haven’t heard opens the door to seeing how opinion holding varies, and at the very least shows who is more and less engaged with the issue.

Here is opinion on overturning Roe, with 30.6% saying they “haven’t heard at all” or “haven’t heard enough” about the case. Of those WITH an opinion, 71% would uphold and 29% would strike down.
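For readers who want the arithmetic: percentages among opinion holders are just the full-sample percentages renormalized to the base that expressed an opinion. A minimal sketch, where the full-sample uphold share is an assumed figure consistent with the reported 71/29 split:

```python
# Sketch: renormalizing opinion to those WITH an opinion.
# The 30.6% "haven't heard" is from the survey; the full-sample uphold
# share below is an assumed number consistent with the 71/29 split.
no_opinion = 30.6
uphold_full_sample = 49.3          # assumed, not a reported figure
strike_full_sample = 100 - no_opinion - uphold_full_sample

base = 100 - no_opinion            # 69.4% expressed an opinion
print(f"Uphold Roe: {100 * uphold_full_sample / base:.0f}%")  # ~71%
print(f"Strike Roe: {100 * strike_full_sample / base:.0f}%")  # ~29%
```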

But look at who is more likely to say they haven’t heard enough and who is more likely to give an opinion.

To my surprise, it is the OLD who are more likely to have an opinion. The young are twice as likely to say they haven’t heard enough.

I wonder if the intense battles over abortion in the 1970s-80s were seared into the political makeup of folks now in their 60s and up in a way that the issue simply hasn’t been for younger generations. A less interesting answer is that the young simply pay less attention.

Other differences are more intuitive.

Ideological moderates are much more likely to say “haven’t heard” than those toward the endpoints of ideology.

But there is interesting asymmetry here with the left more engaged than the right.

Independents are more likely to say they have not heard than partisans, but as with ideology the asymmetry shows Democrats more likely to have an opinion than Republicans. The salience of Texas SB8 as well as Dobbs has probably boosted Democratic concern generally.

There is a small difference between born again Christians and all other respondents, but perhaps a surprise that slightly more born again folks say they haven’t heard enough about Roe.

White respondents are a bit less likely to say “haven’t heard” than are other racial and ethnic group members.

And finally, what about gender?

Hardly any difference in opinion holding.

To return to the academic literature on whether to offer a don’t know/haven’t heard option or not, there is good evidence that pushing people to respond produces results and statistical structure similar to what we see among those who offer opinions when DK is an explicit option.

The variation we see in choosing “haven’t heard” also reflects willingness to respond, beyond simply not having thought about the issue. Good work shows this general reluctance is part of the problem of non-response as well.

Those with intense positions on abortion naturally assume that most people are similarly intense. The results here show we should be cautious in assuming “everyone” has an opinion on Roe (or other issues). And the variation in opinion holding is interesting, sometimes surprising.

Here is the wording we use for this item with all the response categories.

A followup on age: Older respondents are also more likely to have an opinion on a case concerning the 2nd Amendment and the right to carry a gun outside the home. It may be that younger people pay less attention to issues before the Court in general, and so the age effect on opinion holding on Roe may not be the generational difference I suggest above, but simply variation in attention to the Court.

However, this logit model of saying “haven’t heard” includes controls for education and voter turnout in 2020, with age continuing to play a role. That doesn’t prove it is socialization behind the effect, but does show that age effects remain statistically significant even when a number of other variables are included in the model.
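For concreteness, here is a minimal sketch of that kind of logit, fit to simulated data; the variable names and coding are assumptions, and the real survey data are not reproduced here.

```python
# Sketch: logit of saying "haven't heard," with age plus controls for
# education and 2020 turnout. Data are simulated; names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "age": rng.integers(18, 90, n),
    "college": rng.integers(0, 2, n),     # education control (0/1)
    "voted_2020": rng.integers(0, 2, n),  # turnout control (0/1)
})
# Build in the pattern described above: younger respondents are more
# likely to say "haven't heard," even given education and turnout.
xb = 1.5 - 0.04 * df["age"] - 0.5 * df["college"] - 0.4 * df["voted_2020"]
df["havent_heard"] = rng.binomial(1, 1 / (1 + np.exp(-xb)))

fit = smf.logit("havent_heard ~ age + college + voted_2020", data=df).fit()
print(fit.summary())  # age stays significant alongside the controls
```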

Abortion cases, the Court and public opinion

On Dec. 1, 2021 the US Supreme Court heard arguments on Dobbs, the case challenging Mississippi’s ban on abortions after 15 weeks, and arguments to use the case to strike down Roe v Wade’s protection of abortion rights.

Some polling here.

The @MULawPoll national Supreme Court Survey asked in September and in November about both cases. I combine the data here as opinion did not change significantly between the two surveys.
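A minimal sketch of the kind of check behind pooling two waves, using a two-proportion test; the counts are illustrative, not the actual survey counts:

```python
# Sketch: test whether the share upholding Roe changed between the
# September and November waves before pooling. Counts are illustrative.
from statsmodels.stats.proportion import proportions_ztest

uphold = [355, 352]      # respondents upholding Roe in Sept., Nov.
n_opinion = [500, 495]   # respondents with an opinion in each wave

z, p = proportions_ztest(uphold, n_opinion)
print(f"z = {z:.2f}, p = {p:.3f}")
# A large p-value is consistent with "no significant change," which
# justifies combining the two waves.
```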

We offer respondents the option to say “haven’t heard anything” or “haven’t heard enough,” and about 30% pick that for each question (the 30.6% shown as missing in the table are those who have not heard).

For Roe, of those with an opinion, 71% say the court should uphold Roe, 29% say strike it down.

There is more support, and a closer division, on whether the Court should uphold Mississippi’s 15-week ban in Dobbs. 28% lack an opinion (shown as missing).

Of those with an opinion on Dobbs, 54% would uphold the 15 week ban and 46% would strike down the law.

Looking at the joint response, of those with an opinion about both cases, half, 49.6%, would uphold Roe and strike down the 15-week ban. 29% would overturn Roe and uphold the ban.

But 19% want to see Roe remain in effect yet accept greater limitations on abortion rights with the Dobbs 15-week ban. Less than 3% would strike down both Roe and the ban.
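Laid out as a 2x2 table (the fourth cell is filled at 2.4% so the table sums to 100; the post reports only “less than 3%”):

```python
# Sketch: the joint distribution of Roe and Dobbs opinions among those
# with opinions on both cases, entered from the percentages above.
import pandas as pd

joint = pd.DataFrame(
    {"Uphold 15-week ban": [19.0, 29.0],
     "Strike 15-week ban": [49.6, 2.4]},   # 2.4 inferred from "<3%"
    index=["Uphold Roe", "Overturn Roe"],
)
print(joint)
print(joint.sum(axis=1))  # row marginals: Roe opinion in this group
```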

The willingness to support Roe but accept restrictions has been common in polls about abortion. A majority of respondents say either “legal in most circumstances” or “illegal in most” but not legal or illegal in all cases.

Pew national survey data from May 2021 is typical of responses to this question. About 60% are in the “most but not all” categories, with 25% legal in all cases and 13% saying illegal in all cases.

As for what structures opinions about Roe and about Dobbs in my @MULawPoll national surveys, it is ideology that has the strongest effect, with party a bit less strong.

This chart shows the estimated probability of favoring overturning Roe and of upholding Dobbs by ideology.

The green line shows that across ideology people are less likely to say Roe should be overturned, while the higher purple line shows the greater probability they favor upholding Dobbs. Ideology has a strong effect on both, but upholding Dobbs has more support than striking down Roe.
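A sketch of how curves like these are usually produced: fit a logit for each outcome, then predict over a grid of ideology values. The data and the 1-5 ideology coding here are simulated assumptions, not the survey data.

```python
# Sketch: predicted-probability curves across ideology for two outcomes,
# in the spirit of the chart described above. Simulated data; ideology
# is assumed coded 1 = very liberal ... 5 = very conservative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({"ideology": rng.integers(1, 6, 2000)})
# Simulate: conservatives likelier to overturn Roe and to uphold the
# ban, with upholding the ban more popular overall (higher intercept).
p_roe = 1 / (1 + np.exp(-(-3.6 + 0.9 * df["ideology"])))
p_dobbs = 1 / (1 + np.exp(-(-2.4 + 0.9 * df["ideology"])))
df["overturn_roe"] = rng.binomial(1, p_roe)
df["uphold_dobbs"] = rng.binomial(1, p_dobbs)

grid = pd.DataFrame({"ideology": np.linspace(1, 5, 50)})
for outcome in ("overturn_roe", "uphold_dobbs"):
    fit = smf.logit(f"{outcome} ~ ideology", data=df).fit(disp=False)
    grid[outcome] = fit.predict(grid)
print(grid.round(2).head())  # plot each column vs ideology for curves
```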

A similar pattern holds across partisanship, though the slopes are less steep than for ideology.

The contrast between Dems vs Reps and for very liberal vs very conservative is quite sharp in both charts.

Finally, here are multivariate models for opinion on striking down Roe and for upholding Dobbs. Education plays more of a role in structuring opinion on Dobbs than on Roe. Born again Christians are more opposed to Roe and in favor of Dobbs, as one would expect.

Roe Model:

Dobbs Model:

The effects of race and marital status vary between the two cases, while gender is not statistically significant in either model, nor is age.

Our divisions over abortion are unlikely to, shall I say will not, go away regardless of how the Court rules. How much the ruling changes the status quo, and what new political movements it sets in motion, will be a topic for next summer and beyond as the Court’s decision sinks in.

Trust and Question Wording

Here comes a bit about survey question wording. For those just tuning in, NPORS = the National Public Opinion Reference Survey from Pew, which released its 2021 update today (Sept 24) (thanks, Pew!)

According to my national @MULawPoll released this week 56% say “most people can be trusted” and 44% say “most people can’t be trusted”. But today Pew released their NPORS survey conducted this summer and find just 32% say most can be trusted. What’s going on??

This difference, of course, scared the bejeezus out of me. How can Pew’s National Public Opinion Reference Survey differ so much from mine, conducted at a similar time and on a question we would expect to be a stable attitude?? Question wording, my friends. Question wording.

My question is worded “Generally speaking, would you say that most people can be trusted, or most people can’t be trusted?” That was, in fact, the wording Pew used as recently as March 2020 and July 2020. In those two surveys Pew got 58% and 53% saying most can be trusted, close to my 56%.

So did the world go all “untrusty” since 2020? No: Pew changed the question in 2021. Now they asked “Which statement comes closer to your view even if neither is exactly right: Most people can be trusted, or You can’t be too careful in dealing with people?”

And the marginals flipped: with this wording, 32% say most can be trusted and 68% say you can’t be too careful. A year ago, in Pew’s July 2020 survey with the previous wording: 58% most can be trusted, 39% most cannot be trusted. So which wording should we trust?

Pew’s original wording produced pretty consistent results (with slight differences in the stem of the question but not in the response options): Nov 2018 52-47, March 2020 53-46, July 2020 58-39. So quite a change to 32-68 with the “new” wording.

But (as they say) the “new” wording is actually the one Pew generally used before the 2018-2020 polls cited above. They had generally used “you can’t be too careful” as the alternative. And it makes a big difference.

Here are Pew studies with “can’t be too careful”: Apr 2017: 42 (trusted)-57 (can’t be too careful); Apr 2017 42-58; Feb 2016 43-56; Aug 2014 52-48(a); Aug 2014 47-51(b); Apr 2012 37-59. ((a) Web, (b) Phone, same field dates.)

This isn’t a “house” issue with Pew. The GSS has asked the “can’t be too careful” version for a while: GSS-NORC 2018 32-63; GSS-NORC 2016 31-64; GSS-NORC 2014 30-65; GSS-NORC 2012 32-64. That is the stability we’d expect on this item over time, and it is close to Pew’s current 32-68.

So… both wordings appear stable over time and across survey houses: the “can’t be trusted” version gives my 56-44 and Pew’s 58-39, 53-46, 52-47, while the “can’t be too careful” version gives GSS’s 32-63, 31-64, 30-65, 32-64 and Pew’s flipped 32-68.

Which wording we should use is less clear. The “most can’t be trusted” alternative is clear and direct; “can’t be too careful” touches on suspicion. A much deeper analysis of this issue is needed. But this is a great example of seemingly similar items producing big differences.
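One direct way to get at it would be a split-ballot experiment: randomly assign the two wordings within a single survey and compare. A minimal sketch of that comparison, with illustrative counts echoing the 56% vs 32% marginals:

```python
# Sketch: chi-square test for a hypothetical split-ballot wording
# experiment. Rows are the two wordings; columns are [trusting, not].
# Counts are illustrative, echoing the 56% vs 32% marginals above.
from scipy.stats import chi2_contingency

table = [[560, 440],   # "most people can't be trusted" alternative
         [320, 680]]   # "you can't be too careful" alternative
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, df = {dof}, p = {p:.2g}")
# A tiny p-value would confirm the wording effect within one sample.
```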

I think there is a lot to be said for consistency, so I don’t expect to change my wording. Also this isn’t a complaint about Pew. The variation in wording they used actually allows us to understand the effect of question wording. A big help.

The Pew NPORS is a major service to the survey research world. But question wording matters and we need to take it into account, especially with a “reference survey” that influences all of us. Also, the trust item was not included in the NPORS for 2020, so it surprised me this year.

There are other issues to consider where question wording and item construction differs in the NPORS (looking at you, party ID and leaners!) so let’s all take advantage of this great resource. But as someone said: “Trust, but verify.”