6 Surveys and Loaded Questions

A leading question is a question intended to elicit a certain response. In informal, relational contexts, our society generally demonstrates an awareness of leading questions. Mainstream literature and film often include characters who use leading questions to serve their personal agendas, and the concept is observable in day-to-day life. It is easily understood that if someone asked you, “Isn’t this a good song?” you would be more likely to say that it is a good song than if that person asked, “Do you think this is a good song?” In a personal interaction, the understanding that the phrasing of a question may lend itself to a particular response can prompt more careful consideration of the question and, in some cases, of the motives of the asker. Outside the scientific community, however, there is a general lack of awareness of leading questions in more formal contexts, specifically in the context of surveys. When the public looks at the results of a survey, it tends to take them at face value without considering that the way elements of the survey were constructed may have influenced them. Several “biases”, or elements of the presentation of a question, contribute to the effect a particular question will have on a respondent. This chapter focuses specifically on the impact of leading questions on the results of surveys.

The issue of leading questions and their effects on data accuracy is widely recognized in the scientific community. Researchers Herbert Clark and Michael Schober acknowledge the importance of the wording of a question, especially in the context of surveys, in their chapter in Questions About Questions: Inquiries into the Cognitive Bases of Surveys: “Over the years researchers have puzzled over a number of unexpected problems with surveys. Reword a question and the answers often change” (Clark & Schober). The Pew Research Center, a think tank based in Washington, D.C., similarly emphasizes that how a question is worded can have a large impact on how a person answers it: “The choice of words and phrases in a question is critical in expressing the meaning and intent of the question to the respondent… Even small wording differences can substantially affect the answers people provide” (U.S. Survey Research).

A study by Harris, discussed by Elizabeth Loftus, demonstrates that leading questions can influence a person’s judgments of external things. The participants were told that the study was designed to test their ability to approximate measurements correctly and were asked to respond to questions regarding height and length of time with their best numerical guess. After being shown an identical visual representation of a basketball player, some of the respondents were asked, “How tall was the basketball player?” and some were asked, “How short was the basketball player?” (Loftus). On average, the participants who were asked how tall the basketball player was guessed that he was 79 inches, whereas the participants who were asked how short he was guessed that he was 69 inches. Similar questions regarding observable measurements yielded similar results. When participants were asked, “How short was the movie?” they estimated, on average, that it was thirty minutes shorter than did the participants who were asked, “How long was the movie?” (Loftus). How can we understand this gap in the participants’ perceptions of these measurements? In her discussion of Harris’ experiment, Loftus points out that the questions that used the words “tall” and “long” do not suggest to the respondent that the basketball player is either tall or short, or that the movie was either long or short. The questions that used the word “short”, however, suggest to the respondent that the basketball player and the movie were short. In other words, one kind of question was neutral, and the other was a leading question. This study supports the claim that the wording of a question has an impact on the response, and that leading questions may play a crucial role in the outcome of surveys that ask respondents to make a judgment or estimation regarding something outside themselves.

Another study, conducted by Loftus herself at the University of Washington, shows that leading questions can also influence our understanding of our own personal experiences. Forty participants, all using the same headache medication, were asked one of two versions of a question regarding products they had used to treat headaches other than the one they were currently using. The first version was, “In terms of the total number of products, how many other products have you tried? One? Two? Three?” The second version was, “In terms of the total number of products, how many other products have you tried? One? Five? Ten?” (Loftus). The participants who were asked the first question (One/Two/Three) reported on average that they had used 3.3 products before the one they were currently using; the participants who were asked the second question (One/Five/Ten) reported an average of 5.2 products. In both cases a numerical range was presented to the subject, and that range clearly influenced the responses. The second part of the study concerned the frequency of the participants’ headaches. As in the first part, the subjects were asked one of two versions of a question. The first version was, “Do you get headaches frequently, and if so, how often?” On average, these participants said that they experienced 2.2 headaches per week. The second version was, “Do you get headaches occasionally, and if so, how often?” (Loftus). On average, these participants said that they experienced only 0.7 headaches per week. Like the study done by Harris, Loftus’ study shows that the construction of a question influences the way a person responds to it. However, Harris’ study showed that suggestion in a leading question influences a person’s estimation of something outside themselves: the participants did not know the exact measurements of the basketball player or the exact length of the movie, but were asked to guess. The participants in Loftus’ study, by contrast, were asked to report the number of things or times they had actually used or experienced something, and therefore could know an exact number. Loftus’ study not only shows that suggestion can influence a person’s response to a question regarding their own experience, but also that a leading question can cause a person to respond with factually inaccurate information.
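
The kind of comparison behind both studies can be sketched concretely. The short program below is a hypothetical illustration rather than the authors’ actual procedure or data: it simulates respondents randomly assigned to a neutral wording or a leading wording, assumes the leading wording pulls estimates down by roughly thirty minutes, and compares the two group means with a two-sample t-test, the basic analysis by which a wording effect of this kind would be detected.

    # Hypothetical sketch: quantifying a question-wording effect.
    # The numbers below are illustrative assumptions, not data from
    # Harris' or Loftus' studies.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 40  # respondents per wording, chosen arbitrarily

    # Simulated answers to the neutral wording, "How long was the movie?" (minutes)
    neutral_wording = rng.normal(loc=120, scale=20, size=n)

    # Simulated answers to the leading wording, "How short was the movie?",
    # assuming the marked adjective pulls estimates down by about 30 minutes
    leading_wording = rng.normal(loc=90, scale=20, size=n)

    # A two-sample t-test asks whether the two wordings produce different mean estimates
    t_stat, p_value = stats.ttest_ind(neutral_wording, leading_wording)
    print(f"mean estimate, neutral wording: {neutral_wording.mean():.1f} min")
    print(f"mean estimate, leading wording: {leading_wording.mean():.1f} min")
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")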

A similar phenomenon can be observed with another type of leading question, one that causes the respondent to consider only the possibilities presented within the question when answering it. A study performed by researchers at the Universidade do Porto serves as an example of this type of question. They posed the following question on a written survey: “Do you arrange activities with your friends, such as cinema, every week?” (Pereira & Pinto). The question offers one example of an activity that is intended to stand for other similar activities. However, the results of the study show that the subjects responded as if cinema were the only activity they could arrange with their friends. These results elaborate on Loftus’ findings in that they show how the wording of a question can influence, and in this case limit, how a person thinks about their own actions or experiences.

In certain cases, however, carefully worded questions are not effective in influencing a person’s response. A paper by Ann Bowling published in the Journal of Public Health describes the concept of “social desirability bias”, a survey respondent’s desire to portray himself or herself in a positive way. It causes respondents to report behaviors that society perceives as good and to omit behaviors perceived as bad or unflattering. This bias influences the results of many kinds of surveys, including surveys about voting: because voting is generally viewed as something good, people often report having voted when they did not actually vote.

An article by Loftus, Robert Abelson, and Anthony Greenwald outlines their unsuccessful attempts to counteract social desirability bias through the use of leading questions. In the past, a “preamble”, or introduction to the vote-report question, had been used to minimize guilt on the part of the respondent, or to signal the surveyor’s awareness that a failure to vote is not always the result of negligence but is typical and can result from a number of circumstances. The preamble used by the National Election Studies (NES) in their vote-report question is an example of a question designed to elicit an honest answer: “In talking to people about elections, we often find that a lot of people were not able to vote because they weren’t registered, they were sick, or they just didn’t have time. How about you - did you vote in the elections this November?” (Abelson, Loftus & Greenwald). Despite such efforts to minimize the social desirability factor, the preamble produced no decrease in over-reporting, likely because a brief preamble does not override the larger social view that voting is important. Abelson, Loftus, and Greenwald also attempted to design a question that would yield more accurate results for a given election year by letting the respondent report having voted in one or more past elections in addition to the most recent one, which ultimately proved unsuccessful. Finally, they conducted a study in which they changed the NES’s wording of the question “Did you vote in the elections last November?” to “Did you miss out on voting in the elections last November?” (Abelson, Loftus & Greenwald). The difference between these two questions is very similar to the difference between the questions about the frequency of the subjects’ headaches in Loftus’ individual study. The question “Do you get headaches occasionally” suggests to the respondent that it is acceptable, or even likely in the surveyor’s opinion, that he or she gets headaches only occasionally. The question “Did you miss out on voting in the elections last November?” should work in the same way: its wording suggests to the respondent that not voting is normal and acceptable. However, there was no change in the number of people who reported that they had voted but had not actually voted. This study shows that leading questions, though powerful, are not always able to influence a person’s response when other biases are at play, in this case the social desirability bias.
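
To make the comparison of the two vote-report wordings concrete, the sketch below shows how a split-ballot experiment of this kind is typically evaluated. The counts are invented for illustration and are not taken from the NES or from Abelson, Loftus, and Greenwald’s data; the point is that a two-proportion test on the share of respondents who report voting under each wording, as assumed here, would show no detectable difference, matching the null result described above.

    # Hypothetical sketch: comparing reported voting rates under two question
    # wordings in a split-ballot design. All counts are invented for illustration.
    from math import sqrt, erf

    def two_proportion_ztest(x1, n1, x2, n2):
        """Two-sided z-test for the difference between two sample proportions."""
        p1, p2 = x1 / n1, x2 / n2
        pooled = (x1 + x2) / (n1 + n2)
        se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        z = (p1 - p2) / se
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal tail probability
        return z, p_value

    # "Did you vote in the elections last November?" (standard wording)
    reported_standard, n_standard = 152, 200
    # "Did you miss out on voting in the elections last November?" (softened wording)
    reported_softened, n_softened = 149, 200

    z, p = two_proportion_ztest(reported_standard, n_standard,
                                reported_softened, n_softened)
    print(f"standard wording: {reported_standard / n_standard:.1%} report voting")
    print(f"softened wording: {reported_softened / n_softened:.1%} report voting")
    print(f"z = {z:.2f}, p = {p:.3f}")  # a large p-value: no wording effect detected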

Awareness that leading questions and other biases influence the outcomes of surveys will allow the public to view survey results more realistically. Rather than accepting the results of surveys without question, the public should consider that those results may be influenced by leading questions, and in particular by the different forms leading questions can take as outlined in this chapter: suggestion of a quality of an external thing and suggestion regarding a personal experience. The public should also take into consideration that certain biases that play a role in the results of surveys, such as social desirability bias, operate independently of leading questions. The information presented in this chapter is applicable to surveys conducted in any scientific discipline and should be taken into account when viewing the results of both psychological and political surveys.

References

Loftus, E. F. (1975). Leading questions and the eyewitness report. Cognitive Psychology, 7, 560-572.

Bowling, A. (2005). Mode of questionnaire administration can have serious effects on data quality. Journal of Public Health, 27(3), 281-291. doi:10.1093/pubmed/fdi031

Abelson, R. P., Loftus, E. F., & Greenwald, A. G. (1992). Attempts to Improve the Accuracy of Self-Reports of Voting. In J. M. Tanur (Ed.), Questions about questions: Inquiries into the cognitive bases of surveys (pp. 138-153). New York: Russell Sage Foundation. Retrieved from http://www.jstor.org/stable/10.7758/9781610445269.13

Clark, H. H., & Schober, M. F. (1992). Asking Questions and Influencing Answers. In J. M. Tanur (Ed.), Questions about questions: Inquiries into the cognitive bases of surveys (pp. 15-48). New York: Russell Sage Foundation. Retrieved from http://www.jstor.org/stable/10.7758/9781610445269.8

Pereira, A. C., Santos, C., & Pinto, L. (2011). Biases in questionnaire construction: How much do they influence the answers given? Faculdade de Medicina da Universidade do Porto.

Pew Research Center. (2015, January 29). Questionnaire design. Retrieved from http://www.pewresearch.org/methodology/u-s-survey-research/questionnaire-design/