Individual change in party identification, 2016-2020

Recently I tweeted about shifts in aggregate party ID, following Gallup’s release of shifts in their polls. I find 4 other pollsters showing the D-R margin tightening, though not as much as Gallup. Blog version: pollsandvotes.com/?p=217

But what about individual change?

For tracking individual change we need panel data. Thanks to Democracy Fund and Voter Study Group we have a public panel from 2011 through 2020.

I prefer fresh cross sections for tracking aggregate shifts, but panels are THE thing for individual change.

See (or rather hear) the @FiveThirtyEight podcast of Jan 31 for a discussion that includes issues about party leaners that we’ll see in about one tweet here. How much more movable are leaners? Here strength of partisanship matters.

VSG uses the “Michigan” party ID item: “Generally speaking, do you think of yourself as…” followed by “strong or not so strong” for partisans or “lean to Dem or Rep” for independents, to make a 7 point scale.

Below is the 2016-2020 panel turnover. Rows 2016, columns 2020.

Strong partisans are different from any of the 5 middle groups. Strong partisans are 90% likely to still be in the same category after 4 years. The middle 5 groups are only about 64% likely to be in the same category.
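The turnover table behind these numbers is just a row-normalized cross-tabulation of each panelist's 2016 and 2020 answers. A minimal sketch in Python with made-up respondents (the pairs below are illustrative, not the VSG data):

```python
from collections import Counter

# Hypothetical (2016 wave, 2020 wave) category pairs for a handful of
# panelists; the real VSG data has one pair per respondent on the 7-point scale.
pairs = [
    ("Strong D", "Strong D"), ("Strong D", "Strong D"), ("Strong D", "Weak D"),
    ("Weak D", "Weak D"), ("Weak D", "Strong D"), ("Weak D", "Lean D"),
    ("Pure Ind", "Pure Ind"), ("Pure Ind", "Lean R"),
]

counts = Counter(pairs)
rows = sorted({p[0] for p in pairs})  # 2016 categories
cols = sorted({p[1] for p in pairs})  # 2020 categories

# Row-percentage turnover table: of those in category r in 2016,
# what share landed in category c by 2020?
turnover = {}
for r in rows:
    row_total = sum(counts[(r, c)] for c in cols)
    turnover[r] = {c: 100 * counts[(r, c)] / row_total for c in cols}

# Share of 2016 Strong Ds still Strong D in 2020
print(round(turnover["Strong D"]["Strong D"], 1))
```

The same row-percentage logic produces every stability figure quoted here; each row sums to 100.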

If you think party is forever, you are thinking about strong partisans.

Weak partisans (“not very strong”) mostly stay there (62% or 66%) or shift into strong (19% or 20%), though 15-20% scatter toward the other party. These are pretty partisan voters, but about 1 in 5 weaken or shift direction. The pattern is similar for both D & R.

Leaners D (61%) & R (66%) stay leaners, but some (D 12%, R 14%) jump to strong for their party. About as many (D 14%, R 13%) shift to pure independent. Adding in the weak category, about 20% become stronger partisans, 12-14% become pure independents, and 5% of Ds and 2% of Rs shift to the other party.

And pure independents also mostly stay put: 62% are unchanged. Those who shifted in this 2016-20 period were a bit more likely to shift toward the Rs, though shifting into lean was most common (D 10%, R 16%).

This seems consistent w Trump attracting previously less involved voters, esp pure Inds.

Party is sticky, & especially so for strong partisans. Leaners are more apt to move toward their party, but a few move away. Inds also stick about as much as weak partisans & leaners, but were drawn to the GOP a little more in 2016-20.

There was a lot of symmetry in the movements but a slight GOP edge.

Trump, DeSantis, Pence and the GOP

My new @MULawPoll national release is out. Link at end of this post. Here I want to highlight what I think are the most informative bits. Others have tweeted the toplines for DeSantis v Biden and Trump v Biden, but the goal here isn’t 2024; it’s the GOP today.

Trump remains very popular w Republicans, 74% fav, 25% unfav. That is formidable support. DeSantis, not nearly so well known, is 52%-11% w Reps, but 38% haven’t heard enough. Pence at 59-31 is net positive, but only 10% lack an opinion. That puts DeSantis at a 5-1 fav ratio, Trump at 3-1, Pence at 2-1.

Republican support for a Trump run in 2024 is 63%, w 37% not wanting him to run. So 74% are fav to Trump, and a smaller 63% want him to run. Still a solid majority but some hesitancy about a 2024 rerun even among those favorable to him.

Trump has made doubts about the accuracy of the 2020 vote a defining issue for Republicans: 73% say they are not confident in the election (and 71% of those who lean Rep). That issue defines Trump in the party, and the 1 in 4 Reps who disagree are quite unfavorable to Trump.

Favorability to Trump w Reps + Rep leaners is 29% among those confident in the vote, and 88% among the not confident. For Pence, there is no relationship at all between confidence & favs. DeSantis fav is more tied to confidence, but lots of “confidents” are DK for him.

Clearly Trump retains a very strong base of support w Republican voters, who also overwhelmingly adopt his claims about the 2020 election and are very favorable toward him. But support for a rematch with Biden is lower than his fav rating.

Let’s turn back to the head to heads v Biden, but look at the party crosstab. Trump gets 77% of Rep votes, 73% of leaners. DeSantis gets 81% of Reps and 75% of leaners. Trump might well win a primary battle, but GOP voters would support DeSantis at least as much as Trump v Biden.

Takeaway: Trump remains the dominant figure in the GOP, but at least one alternative, DeSantis, performs as well v Biden, gains as much support w GOP voters, and has a better fav-to-unfav ratio in the GOP than Trump (but lots of DK). It is a long time to 2024, but how the GOP divides sets the stage.

Links to full survey release from @mulawpoll national survey of adults, Jan 10-21, 2022.

Full release: https://law.marquette.edu/poll/wp-content/uploads/2022/01/06_SCNationalIssuesPressRelease.pdf

Toplines, Crosstabs etc: https://law.marquette.edu/poll/2022/01/27/detailed-results-of-the-marquette-law-school-supreme-court-poll-january-10-21-2022/

Party ID Trends, Jan. 2022

So I, like many of you, woke up Monday to Gallup’s latest party ID numbers showing a sharp move toward the GOP. The full article is here and I strongly recommend reading it all. It is more nuanced than the Twitter headlines might suggest.

Let’s look at 4 other high frequency live phone polls from Marist, Quinnipiac, Kaiser, and NBC/Wall Street Journal since 2014.

First Ds and Rs w/o leaners, same scales.

Now the Dem minus Rep margin since 2014, again without leaners.

Finally, the percent who are neither Dems nor Reps. This is 100 − Dem − Rep, so it includes Inds, other, don’t know, and refused. Polls aren’t consistent in reporting these categories separately, so the combined “not D and not R” seems the most consistent practice here.
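The arithmetic behind both charts is trivial, but worth writing down. A quick sketch with invented topline numbers, not any actual poll's:

```python
# Hypothetical unleaned toplines from one poll (percent of adults).
dem, rep = 33, 28

margin = dem - rep          # the Dem-minus-Rep margin plotted above
neither = 100 - dem - rep   # Inds + other + don't know + refused, lumped together

print(margin, neither)  # 5 39
```

Because "neither" is computed residually, it is comparable across pollsters even when they break out independents, other, and refusals differently.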

As with Gallup, the Dem minus Rep margin has tightened in all 4 sets of polls here. Gallup has Ds & Rs both at 28% unleaned in both the 3rd & 4th quarters, while Ds led 30-25 and 31-26 in the first 2 quarters. Their leaned party ID has Rs ahead in the 4th quarter.

Leaned party is not readily available for some polling organizations, so I’ve used the unleaned which are comparable across all. Shifts among leaners are not uncommon but can clearly tilt the balance. It would be nice if all reported both unleaned and leaned every time.

The big headline is right: The balance of Ds vs Rs has shifted over 2021 to a smaller D advantage. We see this in all 4 sets of surveys.

Do note that inds+other rise and fall with the election cycle, so both parties tend to decline between elections as the non-partisans rise.

But the parties aren’t losing supporters at the same rate. In 2021 it was the Dems who lost support a bit faster than the Reps.

Bottom line: the 4 polls I’ve collected for 2014-2022 (QPoll, Marist, KFF and NBC) all agree the Dem-Rep margin has tightened, but all still show at least a small D advantage. It is trending down, so that could change, but it hasn’t yet, though for Gallup it has crossed over.

Some technical details

There are two different wordings that are most often used for measuring party identification. The “Michigan” wording is from the UM Survey Research Center work used in The American Voter, a cornerstone of political science:

Generally speaking, do you usually think of yourself as a Republican, a Democrat, an independent, or what?

In contrast the long-standing wording in Gallup polls is

In politics, as of today, do you consider yourself a Republican, a Democrat or an independent?

(Modern surveys randomize the order of parties in the questions.)

Both items are often followed by a strength question for partisans (especially with the Michigan wording) and a lean question for independents (both styles do this most of the time).

Quinnipiac and NBC/WSJ use the Michigan wording and Kaiser uses the Gallup wording. I’ve not been able to find the wording used by Marist as they don’t publish the full survey instrument including demographics on their website.

In the 1980s and 1990s there was a debate in political science about whether party identification moved in response to party performance or issue positions or other “short term forces”. These debates, among other things, considered the different dynamic properties of measures from the two question wordings.

This point was raised in my Twitter thread by https://twitter.com/bcburden, and https://twitter.com/drjjdyck provided a pointer to one important article that compared the dynamics of the two measures:

Abramson and Ostrom 1991 argued that the Gallup wording produced more short-term variation and should be used with extreme caution. Seems like this still holds. https://t.co/gSpsqqH2wQ— Joshua J. Dyck (@drjjdyck) January 18, 2022

I would say that research that is crucially dependent on the dynamic properties of the different measures should consider Abramson and Ostrom’s warning, though I might dissent from “extreme caution” and say “with full awareness of possible differences.” Their work was part of a debate over the responsiveness of partisanship and which measure was “really” capturing it. I’d say we aren’t too wrapped up in that issue these days. As the charts above show, both measures are showing similar trends, and for my purposes that seems the fundamental point.

Given the rise in partisan polarization it might be time to update the comparative analysis of these two wordings, but that isn’t my task today.

Issue Polling… some evidence

A question from @Rufus_GB about the validity of issue polling is linked below.

Here is more of an answer than may have been wished for, largely via links to more thorough analysis than Tweets provide.

I question the value of issue polling, in general, because the results can vary so wildly based on the wording of the question. On Roe, it’s that issue plus the fact that I don’t believe many (most?) Americans understand what Roe actually does/doesn’t do. Thoughts?— Rufus (@Rufus_GB) December 7, 2021

On abortion, wording matters but don’t confound wording with substantively different aspects of the issue. If we ask about “for any reason” or about “serious defect” that is not just wording but different circumstances. Decades of work show the circumstances matter a lot.

See this review of abortion polling since 1972 by @pbump in @washingtonpost

The GSS has asked the same questions since then, with 7 different circumstances. There are clear, consistent differences. That is much more revealing than just “wording”. From the @pbump article:

In my @MULawPoll Supreme Court surveys in Sept. & Nov. 71% of those with an opinion oppose striking down Roe, 29% favor striking it down. But 54% would uphold the 15 week ban in Dobbs, 46% oppose, again of those w an opinion.

Those are substantive differences and make sense.

Here is my analysis of that, including a look at who doesn’t have opinions on the abortion questions. Not all do, and that is also important for understanding issue polls.

The fact that people respond to issue polls differently when the questions raise different aspects of an issue seems an obvious strength of issue polling: circumstances matter, and respondents are sensitive to those circumstances.

If people responded the same way regardless of the circumstances presented in the question, we’d suspect they weren’t paying attention!

There have been a number of recent articles claiming that issue polls are “folly” or that they have seriously missed on state referenda. (And they have missed on some referenda, but the big misses are highlighted while better performance is ignored.)

The Sweep: The Folly of Issue Polling

There are important criticisms: public awareness of issues & information about the issues may be limited. Folks will give an answer but it may not mean much to them. Politicians don’t just “do what the majority wants” so policy doesn’t follow the polls very closely or quickly.

Some might say policy doesn’t mirror opinion polls, and blame the polls. I’d think the elected officials might share the blame. They do respond to public opinion sometimes, but they are also responsive to interest groups’ and donors’ issue preferences. If they don’t adopt policy in line with public majorities, I’d look at those other influences for part of the story.

Issue polls often don’t have an objective “right answer.” That is what elections do for horse race polls: we know the final answer. But there isn’t a “true” measure of presidential approval or support for an issue. So how do we know?

Referenda provide a chance to measure issues

The most comprehensive analysis of referenda voting and polls was presented at AAPOR in May 2021 by @jon_m_rob @cwarshaw and @johnmsides

60 years of referenda and polling, accuracy and errors, w/o cherry picking.

See the full slide deck here.

The fit of outcomes to polls is pretty good, but there are also some systematic errors: more popular issues underperform on referenda, and more unpopular ones overperform, doing better than expected.

The fit varies across issues, but the relationship of polls to outcomes is positive on almost all issues.

Read the full set of slides. They highlight some of the criticism but provide the most comprehensive analysis of issue polls when we have an objective standard for accuracy. The results are pretty encouraging for issue polling’s relevance.

Issue polling may be criticized but those with policy interests use them. In the absence of public issue polls, the interest groups would know what they show but the public wouldn’t. That seems a good reason to have public issue polling.

There are plenty of examples. Here is one from the right, from HeritageAction.

Pew did a careful look at how much issue polls might be affected by the type of errors we see in election (horserace) polls. Probably not by very much.

Here is the Pew analysis.

Fivethirtyeight.com also looked at issue polls: Their analysis is here.

An election poll off by 6 points is a big miss, & we saw a number of those in 2016 & 2020. But issues are not horseraces. If an issue poll shows a 6 point difference between pro- and anti-sides, 53-47, we’d characterize that as “closely divided opinion.” If it shows 66% in favor to 34% against, a 6 point error wouldn’t matter much; the balance of opinion would be clear regardless. Also, issue preference is not the same as intensity, so good issue polling analysis needs to look at whether the issue has a demonstrable impact on other things like vote or turnout, or whether the issue dominates all others for some respondents. Plenty of issues have big majorities but low intensity or impact.

There are good reasons to be careful about interpreting issue polls. But the outright rejection of them is not grounded in empirical research. I suspect it is to deny that “my side” is ever in the minority. It is especially in the interest of interest groups to dismiss them, even as they rely on them.

Bob Dole as seen in the polls

Sen. Bob Dole died today. Many posts and stories about his life and service to the country.

My small addition is this look at his career in the eyes of public opinion.

In 3 parts. First his full career as captured by national favorable-unfavorable polls.

Most polls were conducted in 1994-1996, surrounding his presidential run.

The two early lows reflect his 1976 VP nomination, when he was the voice of GOP attacks.

Note also the rise in the few polls post-1996. 2003 is the last national poll I have for him.

Here I zoom in on the 1995-1997 period when there are the most polls. I also increase the sensitivity of the trend line to pick up some of the shorter term changes.

In the densest period of polls you can see some fluctuation but pretty limited range during the 1996 campaign.

Here I zoom in again on 1996 and increase the sensitivity of the trend line a little more. The oscillation in Aug to Nov is still clear.

The net fav trend remained positive, though a fair number of individual polls were net negative.

Those not of a “certain age” may not know that Dole resigned from the Senate while seeking the Presidency on June 11, 1996. There seems to be a short term rise in net favorable at that point, but it quickly dissipates.

Bob Dole was at times highly partisan, at other times a bipartisan partner on big policy accomplishments. Some loved him and some loathed him, and many were more balanced. His life of service, and sacrifice, was admirable. I wish we had more like him.

Who *doesn’t* have an opinion about Roe v Wade?

Given its prominence in political and legal debate for nearly 50 years, you might think everyone has an opinion about Roe v Wade. But there is variation in opinion holding that may surprise you.

Most telephone surveys ask about Roe without offering a “Don’t know” option, though if the respondent says “I don’t know” or “I haven’t thought about it” that is recorded. Typically this produces around 7-10% who volunteer that they don’t have an opinion. See examples here:

Academics have had a long running debate over whether surveys should explicitly offer “or haven’t you thought much about this?” as part of the question. Doing so substantially increases the percent who say they haven’t thought about an issue.

Despite more “don’t knows” when the option is offered explicitly, the balance of opinion among those with an opinion doesn’t seem to vary with or without the DK option. A debate remains over whether people have real opinions but opt out via DK, or whether, when pushed, they will give answers but hold only weak opinions.

Online surveys present a new challenge. There is no way to “volunteer” a don’t know except to skip the item, which very few do. So should you offer DK explicitly and get more, or not offer it and get very few without an opinion?

In my @MULawPoll national Supreme Court surveys we ask about a variety of Court cases. But obviously most people don’t follow the Court in detail, so I believe we must explicitly offer “or haven’t you heard enough about this?” Doing so produces some 25-30% w/o an opinion on most cases.

So is the “haven’t heard enough/Don’t know” rate really around 10% or really around 30%? Clearly wording makes a big difference, but I think it pretty clear those who opt for “haven’t heard enough” are less engaged on an issue than those who give an opinion.

What is worth looking at here is not the absolute level of “haven’t heard” but how it varies across the population. The invitation to say “haven’t heard” opens the door to seeing how opinion holding varies, and at the very least shows who is more and less engaged with the issue.

Here is opinion on overturning Roe, with 30.6% saying they “haven’t heard at all” or “haven’t heard enough” about the case. Of those WITH an opinion, 71% would uphold and 29% would strike down.

But look at who is more likely to say they haven’t heard enough and who is more likely to say they have an opinion.

To my surprise, it is the OLD who are more likely to have an opinion. The young are twice as likely to say they haven’t heard enough.

I wonder if the intense battles over abortion in the 1970s-80s were seared into the political makeup of folks now in their 60s and up in a way that the issue simply hasn’t been for those in younger ages. A less interesting answer is the young simply pay less attention.

Other differences are more intuitive.

Ideological moderates are much more likely to say “haven’t heard” than those toward the endpoints of the ideology scale.

But there is interesting asymmetry here with the left more engaged than the right.

Independents are more likely to say they have not heard than partisans, but as with ideology the asymmetry shows Democrats more likely to have an opinion than Republicans. The salience of Texas SB8 as well as Dobbs has probably boosted Dem concern generally.

There is a small difference between born again Christians and all other respondents, but perhaps a surprise that slightly more born again folks say they haven’t heard enough about Roe.

White respondents are a bit less likely to say “haven’t heard” than are other racial and ethnic group members.

And finally, what about gender?

Hardly any difference in opinion holding.

To return to the academic literature on whether to offer a don’t know/haven’t heard or not, there is good evidence that pushing people to respond produces similar results and statistical structure as we see among those who offer opinions when DK is an offered option.

The variation we see in choosing “haven’t heard” also reflects willingness to respond, beyond simply not having thought about the issue. Good work shows this general reluctance to respond is part of the issue of non-response as well.

Those with intense positions on abortion naturally assume that most people are similarly intense. The results here show we should be cautious in assuming “everyone” has an opinion on Roe (or other issues.) And the variation in opinion holding is interesting, sometimes surprising.

Here is the wording we use for this item with all the response categories.

A followup on age: Older respondents are also more likely to have an opinion on a case concerning the 2nd Amendment and the right to carry a gun outside the home. It may be that younger people pay less attention to issues before the Court in general, and so the age effect on opinion holding on Roe may not be the generational difference I suggest above, but simply variation in attention to the Court.

However, this logit model of saying “haven’t heard” includes controls for education and voter turnout in 2020, with age continuing to play a role. That doesn’t prove it is socialization behind the effect, but does show that age effects remain statistically significant even when a number of other variables are included in the model.

Abortion cases, the Court and public opinion

On Dec. 1, 2021 the US Supreme Court heard arguments on Dobbs, the case challenging Mississippi’s ban on abortions after 15 weeks, and arguments to use the case to strike down Roe v Wade’s protection of abortion rights.

Some polling here.

The @MULawPoll national Supreme Court Survey asked in September and in November about both cases. I combine the data here as opinion did not change significantly between the two surveys.

We offer respondents the option to say “haven’t heard anything” or “haven’t heard enough” and about 30% pick that for each question (the 30.6% missing in the table are the not-heards).

For Roe, of those with an opinion, 71% say the court should uphold Roe, 29% say strike it down.

There is more support, and a close division, on whether the Court should uphold Mississippi’s 15 week ban in Dobbs. 28% lack an opinion (missing).

Of those with an opinion on Dobbs, 54% would uphold the 15 week ban and 46% would strike down the law.

Looking at the joint response, of those w/ an opinion about both cases, half (49.6%) would uphold Roe and strike down Dobbs, while 29% would overturn Roe and uphold Dobbs.

But 19% want to see Roe remain in effect yet accept greater limitations on abortion rights w Dobbs’ 15 week ban. Less than 3% would strike down both Roe and Dobbs.

The willingness to support Roe but accept restrictions has been common in polls about abortion. A majority of respondents say either “legal in most circumstances” or “illegal in most” but not legal or illegal in all cases.

Pew national survey data from May 2021 is typical of responses to this question. About 60% are in the “most but not all” categories, with 25% legal in all cases and 13% saying illegal in all cases.

As for what structures opinions about Roe and about Dobbs in my @MULawPoll national surveys, it is ideology that has the strongest effect, with party a bit less strong.

This chart shows the estimated probability of favoring overturning Roe and of upholding Dobbs by ideology.

The green line shows that across ideology people are less likely to say Roe should be overturned while the higher purple line shows the greater probability they favor upholding Dobbs. Ideology has a strong effect on both but upholding Dobbs has more support than striking Roe.
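Curves like these come from pushing a fitted logit's linear predictor through the logistic function. A sketch with made-up coefficients (the intercept and slope are illustrative, not the model's actual estimates); ideology runs 1 (very liberal) to 7 (very conservative):

```python
import math

def predicted_prob(ideology, intercept=-4.0, slope=0.9):
    """Probability of favoring overturning Roe at a given ideology score,
    from a logistic model: p = 1 / (1 + exp(-(a + b*x))).
    Intercept and slope here are hypothetical, for illustration only."""
    z = intercept + slope * ideology
    return 1 / (1 + math.exp(-z))

# Probability rises smoothly with conservatism under the assumed coefficients.
probs = [round(predicted_prob(x), 3) for x in range(1, 8)]
print(probs)
```

Substituting the fitted coefficients for each outcome (striking Roe, upholding Dobbs) produces the two curves; the S-shape comes from the logistic transform itself.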

A similar pattern holds across partisanship, though the slopes are less steep than for ideology.

The contrast between Dems vs Reps and for very liberal vs very conservative is quite sharp in both charts.

Finally, here are multivariate models for opinion on striking down Roe and for upholding Dobbs. Education plays more of a role in structuring Dobbs but not for opinion on Roe. Born again Christians are more opposed to Roe and in favor of Dobbs, as one would expect.

Roe Model:

Dobbs Model:

The effects of race and marital status vary between the two cases, while gender is not statistically significant in either model, nor is age.

Our divisions over abortion are unlikely to, shall I say will not, go away regardless of how the Court rules. How much the ruling changes the status quo, and what new political movements it sets in motion, will be a topic for next summer and beyond as the Court’s decision sinks in.

Parties, partisans & perceptions: liberal-conservative locations 1972-2020

I’ve seen a cartoon going around showing the liberal-conservative ideology of the self and Dem and Rep parties. In the cartoon, the self and Rep party stay fixed while the Dem party moves far to the left. It is an effective graphic & rhetoric but how does it fit with data?

The American National Election Studies (ANES) has measured ideology of self & both parties on a 7-point scale since 1972. The points are labelled “extremely liberal”, “liberal”, “slightly liberal”, “moderate”, “slightly conservative”, “conservative” and “extremely conservative”.

How have ideological self-perceptions & party perceptions changed over time? Here are the means from 1972 to 2020. A mean of 4 is “moderate”. Until 2000 both parties were a point or so away from 4, Dems a little closer to 4 than Reps. Since 2000 both have moved out from the center.

In 2020 the Dem party was just over 1.5 points to the left of 4 and the Rep party was just over 1.5 points to the right of 4. Voters’ self-location has hardly moved, remaining slightly right of center. The parties remain roughly symmetric, though each is now further from the center.

But what about how partisans see themselves, their party and the other party? Here self and own party match closely, with the other party far away. Dems see themselves & the party as more moderate than does the general public, 1 point or less to the left.

Dems used to see themselves as quite moderate, less than half a point to the left, but have drifted left since 1998, so they now see themselves as 1 point to the left.

Dems also see the Rep party about where the general public sees it, just over 1.5 points to the right of center. How do Republicans see themselves & the parties? Next tweet please.

Republicans see themselves & their party as close together, and again about 1.5 points to the right, as does the general public. But they see the Dem party as considerably further to the left than does the general public, over 2 points to the left of center.

Reps see themselves & their party drifting right from 1 point right of center in 1992 to just over 1.5 to the right now.

Dems think both themselves & their party more moderate than the general public does. They don’t push the Rep party further to the right, however. Reps put themselves & their party about where the public sees the GOP, but perceive the Dem party much further to the left than the general public does.

How about independents? They put themselves very close to the moderate center, and perceive each party about where the general public does, roughly symmetrically, with each party now about 1.5 points to the left or right of center.

So what about that cartoon that’s been going around Twitter? It doesn’t reflect how the general public perceives the parties over time. The public sees both parties moving a bit out from the center over the last 20 years, but equally so.

But the cartoon does reflect Republicans’ perception of the Dem party, which they push well to the left of where the general public, or independents, see it.

Data from American National Election Studies (ANES) 1952-2016 cumulative data file and ANES 2020 survey. There have been mode changes in recent years, with 2020 primarily conducted by web, with a mix in 2016. I’ve ignored these issues in the analysis here. Those failing to place themselves or the parties on the liberal conservative scale are set to missing and excluded from the analysis.

ANES website: https://electionstudies.org