A few months ago, Kip Sullivan wrote a terrific piece in which he called out the leaders of the public option community for not informing the public that the public option provisions appearing in the HR 3200 and Senate HELP bills were vastly different from Jacob Hacker’s public option proposal that PO leadership had been advocating for. Kip’s view, with which I agree, is that PO partisans should have been vigorous in clarifying the distinctions between the PO they initially advocated and the quite different PO provisions they were now describing as “robust” and beginning to support. Kip believes that the failure to make these distinctions is disingenuous, and that it has been contributing to the confusion about the public option that many people have expressed in polls.

In a more recent follow-up, Kip raises the question of whether:

“. . . (a) pollsters had allowed themselves to be fooled by the bait-and-switch campaign for the “option” and (b), to the extent that they hadn’t been fooled, what did they find out about how badly the average American had been fooled?”

Kip examined the results of 52 polls taken since mid-June of 2009 to answer these questions. In general he found that pollsters were asking people about their opinion of a Hacker-type PO, “the bait,” rather than the POs in bills actually “on the table” — “the switch.” Twenty-three of the 52 polls he examined had questions about the PO. Kip draws the following conclusions from his analysis:

“Of the 23 polls that posed a question about the “option,” only the ABC/Washington Post poll . . . could be said to be accurate, and even that is a questionable statement. To put this the other way around, at least 22 of the 23 polls I examined failed to convey accurate information about the actual “option” under consideration by Congress. It is impossible, therefore, to reach any conclusions about how the public feels about that “option.” Because 21 of the 22 polls that conveyed some information about the “option” asked questions exclusively about a version of the “option” that resembles the one Jacob Hacker originally proposed, we can only draw conclusions about that version. The one tentative conclusion we can draw is that the public appears to support the original Hacker version of the “option” – the large, Medicare-like public program. We must consider this conclusion tentative because the campaign for the “option” has been so deceptive and vague, and because the polls made no effort to undo the deception or compensate for the vagueness.”

The September Washington Post/ABC poll was different from the others. It asked two related questions about the PO. The first, “Would you support or oppose having the government create a new health insurance plan to compete with private health insurance plans?”, provides no information about PO eligibility; in the context of the public debate, it could easily be taken as referring to a nationwide, Hacker-type PO plan that any American could choose as their insurance provider. The second question, however, was asked only of those who opposed or were unsure: “What if this government-sponsored plan was available only to people who cannot get health insurance from a private insurer? In that case, would you support or oppose it?” As Kip points out, asking this follow-up confirms that the first question concerned a PO everyone would be eligible for, while the second asked about restricted-eligibility plans of the kind found in HR 3200 and Senate HELP.

So, of all the polls, only one can be said to have begun to measure the support or opposition of people to the POs currently being considered in Congress, while all the rest only provide data about whether or not people support “the bait” — the Hacker type of PO. But how much information does the WaPo/ABC poll really provide about support for current bills (“the switch”)?

Shockingly, from my point of view, the second question was asked only of the 45% of respondents who did not support the unrestricted-eligibility PO implied in the first question. 47% of this group, or 21% of all respondents, supported a restricted-eligibility PO. And the survey provided no way to know how many of the 55% who supported a generalized PO would also have supported a restricted-eligibility PO had they been asked. Since it is quite possible, even probable, that many of them would have opposed legislation with a restricted-eligibility PO, it cannot be inferred from the data that 76% of respondents support a restricted-eligibility PO of the kind being considered in Congress. Yet that is exactly the conclusion drawn by WaPo and ABC: “Support for a public option swells to 76 percent if it were available only to people who can’t get coverage from a private insurer.”
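The arithmetic behind these figures, and the limits of what they can support, can be sketched in a few lines (all percentages are the ones reported in the poll itself):

```python
# September WaPo/ABC poll figures, as reported:
general_support = 0.55        # supported the general (Hacker-type) PO
restricted_among_rest = 0.47  # of the remaining 45%, share backing the restricted PO

# Share of ALL respondents known to support the restricted-eligibility PO:
known_restricted = (1 - general_support) * restricted_among_rest
print(round(known_restricted * 100))  # -> 21

# WaPo/ABC's 76% silently assumes every general-PO supporter would also
# back the restricted version -- an assumption the data cannot confirm.
claimed = general_support + known_restricted
print(round(claimed * 100))  # -> 76

# All the data actually establishes is a range: 21% is the floor (if none
# of the 55% would back the restricted PO), 76% the ceiling (if all would).
```

In other words, the honest reading of the data is an interval from 21% to 76%, not the single 76% figure the pollsters reported.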

Yesterday, Jon Walker found much the same problem in the latest WaPo/ABC poll. In that poll the first question was the same as in the September poll, but the second question quoted above was replaced with:

“9. (IF OPPOSE/NO OPINION FOR GOVERNMENT PLAN) What if this government-sponsored plan was run by state governments and was available only to people who did not have a choice of affordable private insurance? In that case would you support or oppose this idea?”

The results were 57% support for the PO in response to the first, general question, with 40% opposed and 3% undecided. The second question was again asked only of the 43% who did not support the general PO. Of those, 45%, or 19% of the total sample, supported a PO run by state governments.

As Jon points out, WaPo and ABC added the support for the general PO to the support for the state-run PO, emerging with 76% support for the PO, the same figure as in the September survey. In doing so they went beyond their data, since there is no way of knowing whether all of the 57% who supported the general PO would also support the state-run PO. Jon also correctly points out:

”The other problem with the poll is that question 9 should have been divided into two separate parts. It combines two different ideas (state-based public plans and restricting eligibility) in a single question. It is impossible to know if both changes increase support, only one, or one makes people more supportive while the other makes them generally less supportive.”

However, he neglects to point out that the September survey committed the same error of adding support for the general PO to support for the restricted-eligibility PO to arrive at a 76% support level. So he also neglects to conclude that the earlier survey was in error, and that there was no basis for its conclusion that restricting eligibility raised support for the PO from 55% to 76%.
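The same bounds logic applies to the October numbers. A quick sketch, again using only the percentages the poll itself reported:

```python
# October WaPo/ABC poll figures, as reported:
general_support = 0.57  # supported the general PO (40% opposed, 3% undecided)

# The remaining 43% were asked about the state-run, restricted plan;
# 45% of them supported it:
follow_up_support = 0.45 * (1 - general_support)
print(round(follow_up_support * 100))  # -> 19

# Adding the two groups yields WaPo/ABC's 76%, but that is only an upper
# bound; the data alone places support for the state-run plan anywhere
# between 19% and 76% of the sample.
print(round((general_support + follow_up_support) * 100))  # -> 76
```

Here the gap is even wider than in September, because question 9 bundles two changes (state administration and restricted eligibility) whose separate effects cannot be disentangled, as Jon notes.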

So this leads us back once again to Kip’s contention that most of the polling has focused on the “bait” of the general PO and not on the “switch” of the current bills. The WaPo/ABC surveys are the only ones to address the current bills at all, and even their focus is limited to the sub-set of each sample that opposed or was undecided about the general PO.

Polling organizations such as WaPo/ABC have accepted a misleading frame for the public option story by mainly asking the public its opinion of an idea very similar to the original PO. They have not asked everyone in any of their samples about restricted-eligibility PO plans, or about state-run plans. They have not asked anyone about single-payer, Medicare for All, since June, playing into the Administration’s view that it should be taken off the table, even though millions of people favor it, even though there are two current single-payer bills in Congress, and even though it would have required only one extra question in any of these surveys. What a miserable failure in serving the public this record of polling is. What accounts for it? Are the people at these polling organizations so politically biased that they deliberately slanted their supposedly scientific activities to create a frame favorable to Administration proclivities? Or are they simply so immersed in the Washington insider world that they are no longer capable of designing reasonably objective polls that represent shades of opinion across the political spectrum on health insurance reform? I’m sure the answer is some combination of bias and incompetence. But based on WaPo’s error of adding general PO support to restricted-eligibility PO support in its surveys, I’m starting to lean a bit toward the incompetence explanation.

The errors committed by polling organizations, both in framing their polls and in manipulating their data to draw conclusions, have very important implications. Just today, Jon Walker blogged about blue dog Senator Ben Nelson’s immediate attempt to use the WaPo error to support his own views, quoting Nelson, via TPM, claiming that the WaPo poll showed support for a PO with a State opt-out/opt-in choice rising to 76% of the sample. Nelson’s statement exaggerates WaPo/ABC’s own error, since their poll said nothing about opt-out/opt-in and talked only about State-run POs. But had WaPo/ABC not made its error, Nelson’s opportunity to exaggerate even further would not have existed. It’s the jungle telegraph effect: once you make an error, the political jungle magnifies it, and before you know it all sorts of stories are circulating as facts, especially when politicians have a political motive to believe them.

(Also posted at the Alllifeisproblemsolving blog where there may be more comments)