When Donald Trump proposed his total ban on Muslims entering the United States earlier this week, he justified this idea, in part, by pointing to a survey commissioned by a far-right group: “[A] poll from the Center for Security Policy released data showing ‘25% of those polled agreed that violence against Americans here in the United States is justified as a part of the global jihad’ and 51% of those polled, ‘agreed that Muslims in America should have the choice of being governed according to Shariah.’” (CSP’s head, Frank Gaffney Jr., has been accused by the Southern Poverty Law Center and other groups of fearmongering about Muslims — he has memorably accused a Hillary Clinton aide of being a Muslim Brotherhood spy.)
The survey in question, conducted by the polling company, inc./WomanTrend (TPC from here on out), was published in June on the CSP website. Soon thereafter, experts and commentators began picking away at what were seen as major flaws. As one Huffington Post article by Nathan Lean and Jordan Denari noted, the big, deal-breaking problem underlying the survey was that it was opt-in — rather than reach out to a random sample of Americans, and therefore American Muslims, TPC posted the poll online and invited people to participate. From a survey-methodological standpoint, this is an immediate, nonnegotiable reason not to view this poll as representative of the broad population of American Muslims. For a variety of reasons, the population of people who choose to opt in to a poll may differ in very important ways from the wider population reached with traditional random-sampling methods. (Even if one did accept the poll’s numbers at face value, argued Georgetown’s Bridge Initiative, an anti-Islamophobia research group, a closer read tells a more complicated, less scaremongering tale than CSP’s — and reveals some loaded and oversimplified questions.)
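A toy simulation makes the self-selection problem concrete. Every number below is invented for illustration — none of it comes from the CSP poll — but it shows the mechanism: if the people who hold a given attitude are even modestly more likely to opt in to a panel, the opt-in estimate drifts well away from the truth, while a random sample stays close.

```python
import random

random.seed(0)

# Hypothetical population of 100,000 people; 10% hold some attitude X.
# (Illustrative assumption, not data from any real survey.)
population = [1] * 10_000 + [0] * 90_000
true_rate = sum(population) / len(population)  # 0.10

# Random sample of 600: unbiased in expectation.
random_sample = random.sample(population, 600)

# Opt-in panel: assume X-holders opt in with probability 0.15,
# everyone else with probability 0.05 — a 3x self-selection skew.
opt_in_pool = [p for p in population
               if random.random() < (0.15 if p else 0.05)]
opt_in_sample = random.sample(opt_in_pool, 600)

print(f"true rate:     {true_rate:.2f}")
print(f"random sample: {sum(random_sample) / 600:.2f}")
print(f"opt-in sample: {sum(opt_in_sample) / 600:.2f}")  # skews high
```

Under these made-up numbers, the opt-in estimate lands near 25 percent even though only 10 percent of the population holds the attitude — and nothing in the published methodology lets a reader rule out (or quantify) that kind of skew.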
Now, TPC never hid the fact that this was an opt-in survey. It was, however, quite vague about exactly where it got its sample from.
Here’s the Statement of Methodology that was posted on the CSP website:
STATEMENT OF METHODOLOGY
The polling company, inc./WomanTrend conducted a nationwide online study among 600 Muslim adults living in the United States (age 18+) on behalf of The Center for Security Policy on June 1-10, 2015.
The sample was drawn utilizing online opt-in panels of respondents that have previously agreed to participate in survey research. Potential survey participants were recruited using from [sic] multiple sources using a dynamic sampling platform for verification, real time profiling, and random selection based on project requirements. This multi-sourcing model increases reach and capacity, improves consistency and minimizes bias.
The methodology used for this online survey instrument is consistent with international industry standards outlined in the ESOMAR Guideline for Online Research (https://www.esomar.org/uploads/public/knowledge-and-standards/codes-and-guidelines/ESOMAR_Guideline-for-online-research.pdf).
The original survey instrument screened respondents for age, religion, gender, and region. An online sample frame was selected for this study due to the difficulty in reaching Muslim-Americans over the telephone on account of their low incidence among the nationwide U.S. population (http://pewrsr.ch/1LsdBs8/; http://bit.ly/1LsdAEA). The final questionnaire was approved by authorized representatives from The Center for Security Policy prior to fielding.
I ran this statement by Dr. Jon Krosnick, a social psychologist with expertise in public-opinion polling and attitude formation, and he said that much of the wording here is unclear in the context of an opt-in poll (he wasn’t sure what “using a dynamic sampling platform for verification, real time profiling, and random selection based on project requirements” means, for example). Moreover, he said “There is no scientific basis for [the] statement” about the accuracy of TPC’s “multi-sourcing model.”
After I forwarded Krosnick’s statements to TPC, a representative from the company who asked not to be identified by name responded with further methodological information:
o Panelists are recruited through various channels including but not limited to… email solicitation, online banner ads, word-of-mouth, phone recruitment, partnerships with thousands on [sic] online websites, partnerships with large travel/hospitality related companies. They join the panel with the intention to participate in market research.
o Panelists are compensated for their time and completing surveys. The most common method of compensation is points that can be redeemed for some other form of currency that makes sense to the panelist. Common forms of currency include PayPal credit, Amazon dollars, Membership Program Loyalty points (like frequent flyer miles), etc.
The representative also noted that “As this poll was conducted among an online group of opt-in respondents, we did not publish a margin of error or otherwise advise our client that the data were statistically representative of the entire U.S. Muslim population. In addition, Mr. Trump’s premise and policy proposal has no backing in the survey.” In other words, even the company that conducted the survey doesn’t think it was appropriate for Trump to use it the way he did.
Krosnick pointed out that these sorts of surveys can be rife with problems as a result of the incentives they entail — in some cases, respondents want to just quickly blow through as many as possible, for example, since the boost to their income depends on how many surveys they can complete. And in a report on nonrandom sampling practices, the American Association for Public Opinion Research highlights some other challenges with these sorts of surveys, including “The tendency for people to sign up on the same panel using different identities in order to increase their chances of being selected for surveys.”
It isn’t necessarily the case that TPC’s survey was susceptible to this sort of chicanery — maybe there were safeguards in place. The fundamental problem here is the lack of transparency; whereas with an academic survey or one prepared by a “traditional” pollster there’s a detailed accounting of the methodology and sampling procedure, here, even after following up with the company, it’s unclear exactly how the survey was conducted, except that respondents were brought in from thousands of different places. Other basic questions, like how many respondents were contacted to get to a sample of 600 Muslims, are also unanswered.
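That last question matters because of the low-incidence problem TPC’s own methodology statement flags. As a back-of-the-envelope sketch — using a rough Pew-style figure of about 1 percent of U.S. adults being Muslim, an assumption here, not a number TPC disclosed — screening by incidence alone implies contacting tens of thousands of people:

```python
# Illustrative screening arithmetic; the incidence figure is an
# assumption (roughly 1% of U.S. adults), not a number from TPC.
target_completes = 600
assumed_incidence = 0.01

expected_contacts = target_completes / assumed_incidence
print(f"{expected_contacts:,.0f}")  # prints "60,000"
```

Whether TPC actually screened on that scale, pulled from pre-profiled panels, or did something else entirely is exactly the kind of detail the published methodology leaves out.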
Now, some survey experts think there might be a time and a place for paid opt-in polling. For example, Dr. Robert Oldendick, a public-opinion expert at the University of South Carolina, said in an email that some such polls have had good track records in recent years when it comes to predicting the results of major U.S. elections, even though they are not drawing on traditional random samples. “And while the scientific polling community remains skeptical,” he said, “more organizations (such as the CBS Poll and the Pew Research Center) have begun to incorporate these methods into their procedures.” But that still doesn’t mean a procedure like the one TPC used should be used to try to understand Muslims in the U.S. Oldendick said that “there is no evidence that I am aware of that what works with the general population (i.e., the accuracy of these methods in general election contests) works with a very small subset of the population.”
What it comes down to is that this survey never should have been taken to indicate anything meaningful about American Muslims. And yet, because it appeared to confirm people’s worst suspicions about this group, it blew up, helped along not only by Donald Trump but, shortly after its release, by Bill O’Reilly.
In an emailed statement, the Center for Security Policy said it stands by the results of its poll and highlighted the fact that some news outlets have also used opt-in online polls. One of them, the statement pointed out, is New York Magazine — CSP linked to a Cut article headlined “The Sex Habits of 784 College Students.” Of course, nowhere does that article claim to be representative of all college students — an indication of the survey’s limitation is right there in the headline — and just a few sentences in is the caveat that “The poll was designed by journalists, not social scientists.”