Are Statistics and Findings Based on Surveys Reliable?

On this forum and others, we often see articles in which “scientific evidence says” one thing or another, based on data collected from self-report surveys given to participants in research studies. In another article written for HealthGuidance.org (How to Read Articles About Health Care Research), I pointed out one of the problems with the “survey” approach – selection bias. In that example, thousands of people surveyed said that their “#1 dating turnoff” was people with “bad teeth.” It turned out that the people surveyed were all patients of cosmetic dentists who had just paid $5,000 to $20,000 to have their own “bad teeth” fixed. Classic selection bias. A more representative sample of people almost certainly wouldn’t have responded that way.

Another problem with surveys was pointed out to me by my first-year psychology professor. He had his teaching assistants pass out a survey to each side of the large classroom, had all of us answer the questions, and then, while his TAs were “grading the papers,” predicted what the results would be. He said that the people on the left side of the class would answer the questions one way, and that the people on the other side would answer them in exactly the opposite way. He turned out to be 98% correct. The “trick” was that there were two different versions of the questionnaire: one worded to “force” one set of answers, and the other worded to “force” exactly the opposite set of answers to the same underlying questions.

A new study shows that we can be tricked into instantly reversing our own answers

In the study, conducted at Lund University in Sweden, cognitive scientists led by Lars Hall designed a survey that contained a built-in “magic trick.” A total of 160 volunteers completed a 12-question survey handed to them on a clipboard. When a subject completed the first page of the survey and turned the page, a cleverly placed patch of glue caused the original wording of certain questions to stick to the backing, revealing a subtly altered version of each question underneath – with the subject’s original answers left intact. If the original question said something like “Government surveillance of email should be prohibited,” asking for an Agree or Disagree answer, the altered version substituted the word “allowed” for “prohibited.”

The study participants were then asked to read several of the questions aloud, including the two that had been changed by the “magic trick.” Some 50% of the participants did not notice that their answers had essentially been reversed (because the wording of the questions had been reversed). Some 69% of them fully accepted their “new” answers, and at least 53% were willing to argue for the correctness of those “new” answers – which, of course, were the complete opposite of the answers they’d given just moments before.

If we’re so willing to change our own answers, are surveys meaningful?

The researchers point to these findings as raising significant questions about whether self-report questionnaires and surveys are meaningful. People, they point out, hold complex views on most issues, and may not always be able to compress that complexity into an Agree/Disagree or Yes/No answer.

The fact that subjects not only failed to notice that the altered questions had reversed their previous answers, but then argued in support of the new answers, raises even more questions. Hall and his fellow researchers point out that this does not necessarily indicate that the subjects were “wishy-washy” and easily swayed (although that is certainly a possibility) so much as it indicates how flexible they were in finding support for positions they believed they had already taken.

Many of the subjects, tricked into believing that they had answered a question positively when they had originally answered it negatively, now “doubled down” (to borrow a metaphor from blackjack) on the new positive answer and defended it. As a psychologist from Boston College who was asked to review the study put it, “These findings suggest that if I’m fooled into thinking that I endorse a view, I’ll do the work myself to come up with my own reasons for endorsing it.”

All of these issues – selection bias, questions worded to “force” certain replies, and subjects’ willingness to flip-flop on their own replies – certainly cast some doubt on the wisdom of treating survey data as fully “scientific” or conclusive. Self-report surveys can be useful, but they are also vulnerable to plain self-deception.

Yet another classic example of the latter comes from food surveys, in which people were asked to report exactly how much they ate every day – in a clinical setting where the researchers independently measured how much they really ate. Subjects tended to underreport their intake by 20-30%; someone who actually consumed 2,500 calories a day would typically report eating only about 1,750 to 2,000. In these cases, there were no “tricks” involved, just people’s tendency to want to believe that they ate less than they really did.




