Midterm Seat Loss

I’ve been shocked to hear several sources I respect get the midterm seat loss story wrong. So here is my effort to clarify.

The president’s party almost always loses House seats, but there have been four* exceptions since 1862: 1902, 1934, 1998 and 2002. *HOWEVER, in 1902 the House expanded, so while Republicans gained seats, Democrats gained more; Republicans thus won a smaller percentage of seats that year. So the president’s party has lost strength in all but three midterms since 1862.

In the Senate the president’s party usually loses seats, but not as reliably as in the House. There have been 6 exceptions since 1960.

There is little difference, on average, in House seat losses in 1st vs 2nd midterms: an average of -26.4 in 1st midterms and -28.1 in 2nd. NO SIX YEAR ITCH! NO 1ST MIDTERM CURSE EITHER, for that matter.

2nd midterms HAVE been worse in the Senate: -2.3 in 1st, -6.0 in 2nd.
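To make the averaging behind these comparisons concrete, here is a minimal sketch. The handful of midterms below is an illustrative subset, not the full series; substitute the complete (corrected) Vital Statistics numbers to reproduce the -26.4 and -28.1 House averages quoted above.

```python
from statistics import mean

# Illustrative subset of midterms: (year, president's-party House seat
# change, 1 = first midterm of a presidency, 2 = second midterm).
# Replace with the full corrected Vital Statistics series for real averages.
midterms = [
    (1994, -54, 1),
    (1998, +5, 2),
    (2010, -63, 1),
    (2014, -13, 2),
]

avg_first = mean(chg for _, chg, n in midterms if n == 1)
avg_second = mean(chg for _, chg, n in midterms if n == 2)
print(f"1st midterms: {avg_first:+.1f}, 2nd midterms: {avg_second:+.1f}")
```

The same grouping, run over all midterms since 1862, is what shows there is no first- or second-midterm curse on average.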

So PLEASE stop saying the president’s party only gains seats “once in the last 100 years” (you know who you are). The right answer is “three times in the last 100 years.”

And don’t imply the Senate is as predictable as the House. They aren’t the same.

And… 1st term vs 2nd? Nah. This is another rant: many people bring up the “first midterm” (and in a 2nd term almost always the “second midterm”) as if the distinction mattered. It doesn’t, on average. It does vary across presidencies, with bigger losses in the 1st midterm for some presidents and in the 2nd for others.

And will 2022 be different? I don’t know. But we should get the history right.

Data details

These seat changes reflect the immediate outcome of the November election. Sometimes members die, change party or resign before the Congress is sworn in, and of course changes can occur during the Congress.

Brookings hosts Vital Statistics on Congress. Note they have a typo for 1998 indicating a loss rather than a gain. I use their data here with that fix.

Here is the Vital Statistics table.

There are small differences if you use the Clerk of the House table (p. 59).

Trust and Question Wording

Here comes a bit about survey question wording. For those just tuning in, NPORS is the National Public Opinion Reference Survey from Pew, which released its 2021 update today (Sept 24) (thanks, Pew!).

According to my national @MULawPoll released this week, 56% say “most people can be trusted” and 44% say “most people can’t be trusted”. But today Pew released their NPORS survey, conducted this summer, and found just 32% say most can be trusted. What’s going on??

This difference, of course, scared the bejeezus out of me. How can Pew’s National Public Opinion Reference Survey differ so much from mine, conducted at a similar time and on a question we would expect to be a stable attitude?? Question wording, my friends. Question wording.

My question is worded “Generally speaking, would you say that most people can be trusted, or most people can’t be trusted?” That was, in fact, the wording Pew used as recently as March 2020 and July 2020. In those two surveys Pew got 58% and 53% “most can be trusted”, close to my 56%.

So did the world go all “untrusty” since 2020? No: Pew changed the question in 2021. Now they ask “Which statement comes closer to your view even if neither is exactly right: Most people can be trusted or You can’t be too careful in dealing with people?”

And the marginals flipped: with this wording, 32% most can be trusted, 68% you can’t be too careful. A year ago, in Pew’s July 2020 survey with the previous wording: 58% most can be trusted, 39% most cannot be trusted. So which wording should we trust?

Pew’s original wording produced pretty consistent results (with slight differences in the question stem but not in the response options): Nov 2018 52-47, March 2020 53-46, July 2020 58-39. So quite a change to 32-68 with the “new” wording.

But (as they say) the “new” wording is actually the one Pew generally used before the 2018-2020 polls cited above: “you can’t be too careful” was the usual alternative. And it makes a big difference.

Here are Pew studies with “can’t be too careful”: Apr 2017: 42 (trusted) - 57 (can’t be too careful); Apr 2017: 42-58; Feb 2016: 43-56; Aug 2014: 52-48 (a); Aug 2014: 47-51 (b); Apr 2012: 37-59. ((a) web, (b) phone, same field dates.)

This isn’t a “house” issue with Pew. The GSS has asked the “can’t be too careful” version for a while: GSS-NORC 2018 32-63; GSS-NORC 2016 31-64; GSS-NORC 2014 30-65; GSS-NORC 2012 32-64. The stability we’d expect on this item over time and close to Pew’s current 32-68.

So… both wordings appear stable over time and across survey houses: my 56-44 and Pew’s 58-39, 53-46, 52-47 with one wording; GSS’s 32-63, 31-64, 30-65, 32-64 and Pew’s flipped 32-68 with the other.
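The size of the wording effect is simple arithmetic over the marginals above. A quick sketch, using the “most people can be trusted” percentages reported in this post:

```python
from statistics import mean

# "% most people can be trusted" under each alternative wording,
# taken from the surveys cited above.
direct = [56, 58, 53, 52]        # "most people can't be trusted" alternative
careful = [32, 31, 30, 32, 32]   # "can't be too careful" alternative

gap = mean(direct) - mean(careful)
print(f"average wording gap: {gap:.1f} points")
```

Roughly a 23-point shift in the marginal from changing the response alternative alone, an order of magnitude larger than typical sampling error.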

Which wording we should use is less clear. “Most can’t be trusted” is clear and direct, while “can’t be too careful” touches on suspicion. This issue needs a much deeper analysis. But it is a great example of seemingly similar items producing big differences.

I think there is a lot to be said for consistency, so I don’t expect to change my wording. Also, this isn’t a complaint about Pew. The variation in wording they used actually allows us to understand the effect of question wording. A big help.

The Pew NPORS is a major service to the survey research world. But question wording matters and we need to take it into account, especially with a “reference survey” that influences all of us. Also, the trust item was not included in the NPORS for 2020, so this surprised me.

There are other issues to consider where question wording and item construction differ in the NPORS (looking at you, party ID and leaners!), so let’s all take advantage of this great resource. But as someone said: “Trust, but verify.”

Hello World!

Sixteen years ago this week a hurricane hit New Orleans and I launched PoliticalArithmetik, my first blog. This week a hurricane hit New Orleans and I’m (re)launching a website, PollsAndVotes.com.

After a year of PoliticalArithmetik, Mark Blumenthal (@mysterypollster) and I launched Pollster.com (with the support of Doug Rivers) and spent several years explaining polling and providing tracking of races, presidential approval and other topics in public opinion. In 2010 HuffPost bought Pollster and Mark had a good run with that. I departed and started PollsAndVotes.com in 2011, but have not maintained the site for a while. This is the relaunch of PollsAndVotes.com.

For some while now I’ve primarily posted polling analysis on Twitter at @PollsAndVotes. As much as I like Twitter (most of the time), I think it is time to again have a PollsAndVotes website that allows longer posts in one place, where older posts, like last week’s or last month’s, can easily be found and searched. Having an editor to fix typos is also welcome.

I’ll be building out this site at a somewhat deliberate pace. I’ve decided not to import the old posts from the previous PollsAndVotes.com, let alone from PoliticalArithmetik. I’ll update some of those, such as partisanship trends, but start fresh with current data.

There will be a mix of topics here, but I’ll not be trying to replicate what Pollster.com did and what FiveThirtyEight.com and RealClearPolitics.com already do well. Most of the analysis here will be deeper dives into national and state polling data that go beyond trends. I also hope that my fellow academics will find graphics that may be useful in teaching.

The menu topics at the top of the page will (eventually!) provide a quick guide to analysis of “Polls” and “Votes” but also Wisconsin politics, party ID, voter turnout, roll call votes and the US Supreme Court. Those first two will be something of a catch-all category. ;-)

Sixteen years ago I spent Labor Day weekend at home instead of the American Political Science Association annual meeting, keeping up with news of Katrina and launching PoliticalArithmetik. What started that weekend changed my life. I’ve still got a few days until this Labor Day weekend, and am not attending APSA, though I’ve been following the news on Ida. I hope you find the site interesting and useful.