We have tried to explain the problems with “issue polling” in this newsletter and on our podcasts before. But I’ll summarize again here. Asking people to tell you who they will vote for—and whether they will vote—is hard, but it is a concrete question about a future event that the person has the information to answer. Maybe because we can compare the responses to the actual results in a matter of weeks or months, we tend to take them with a grain of salt. And we expect pollsters to refine their methodology to improve their accuracy every cycle.
By comparison, asking people about their feelings on an issue of public policy is generally silly. First, very few issues are actually simple enough for a meaningful number of people to know much about. Second, it’s hard—in some cases, arguably impossible—to phrase questions about policy in a non-leading fashion. Third, and most important to my mind, the sentiments supposedly being measured can resist easy categorization. In election polling, the respondent has a finite number of options, and the question is about a single concrete action: For which of these candidates, if any, will you pull the lever? But issue polling is about sentiment—and people can have contradictory, complicated feelings about an issue that are hard to capture in “agree” or “strongly agree” responses.
And yet, politicians and government leaders love issue polls. So do corporate boards, for that matter. They provide data—even if the data is crap—to back up someone’s argument, and they give a sense of direction to someone whose chief goal is to win reelection. As the French revolutionary Alexandre Auguste Ledru-Rollin may or may not have said, “There go the people. I must follow them, for I am their leader.”
I’m sure you can guess where I’m going with all of this. For years, news stories on Afghanistan have provided data to back up the idea that Americans don’t want to be there. Here are some of the questions and results from polls in April and May of this year: