
Artificial Intelligence: Comparing Survey Responses for Online and Offline Samples*

Craig M. Burnett
Assistant Professor
Department of Government and Justice Studies
Appalachian State University
burnettcm@appstate.edu

Abstract: Over the past decade, an increasing number of scholars and professionals have turned to the Internet to conduct research ranging from public opinion surveys to complicated psychology and economics experiments. While there has been a focus on whether online samples are reliable and accurate, less research examines the behavioral differences between online and offline respondents. This paper focuses on one such potential difference. I use an experiment to gauge whether online respondents augment their knowledge with outside research. In the experiment, I ask a sample of online and offline subjects to answer a battery of factual knowledge questions. The results show that respondents in the online treatment group are significantly more likely to answer questions correctly, a relationship that strengthens as question difficulty increases. These findings suggest that some, but not all, respondents completing surveys online are supplementing their answers with research.

I owe a special thanks to Michael Best, Joseph Petrack, and Greg Travis for their excellent research assistance. I am also indebted to the Department of Government and Justice Studies at Appalachian State University and Mat McCubbins for funding this research. I thank Todd Hartman and Vladimir Kogan for helpful comments on previous drafts. All errors remain my own.

Over the last six months, responding to invitations from various research firms, I have sounded off dozens of times about casino resorts, business travel, automobiles, bakery products, commercials, snack foods, movies, toothbrushes, winter sports and coffee beans. Occasionally I felt so ill informed about a topic that I cribbed an answer, pausing to do a Google search, for example, to help on a couple of sports surveys even though nothing but my ego was riding on an informed response.
Nancy Beth Jackson, "Opinions to Spare? Click Here," The New York Times

A substantial amount of academic, commercial, and government research has made use of the Internet as a source for recruiting subjects for surveys and experiments. Large polling firms such as YouGov/Polimetrix and Knowledge Networks, and even the American National Election Study, now use online samples in whole or in part. The Internet has not only dramatically reduced the cost of research but also helped overcome growing challenges in reaching respondents with traditional methods due to the increased use of cellular phones and lower response rates (e.g., Curtin, Presser, and Singer 2005; Holbrook, Krosnick, and Pfent 2007). There is little reason to doubt that the use of online samples in research on public opinion, economics, and psychology will continue to grow at a fast pace.

Understanding both the strengths and weaknesses of the various modes used to gather survey responses is critical for scholars and practitioners who use the Internet to assemble samples of respondents. There remains some disagreement about whether online respondents, even after reweighting, can form a representative sample of the target population. In a broad look at multiple surveys, Yeager et al. (2011) demonstrate that online non-probability samples, even after post-stratification, are not as accurate as probability samples. Ansolabehere and Schaffner (2011), however, argue that careful construction of an online opt-in sample with proper matching to Census demographics produces results that are just as reliable and accurate when compared to phone and mail surveys (see also AAPOR 2009; Hill et al. 2007).

By contrast, we know less about the behavioral differences between online and offline respondents. There are good reasons to expect that online respondents will approach surveys differently. First, unlike telephone and in-person interviews, online subjects do not directly interact with an interviewer. Without an interviewer pressing the respondent for an answer, online interviewees have the luxury of time. Additional time, as Prior and Lupia (2008) show using an experiment, can lead to better answers. Online respondents are also more likely to report socially undesirable behavior or traits (e.g., Chang and Krosnick 2009). Second, and what I focus on in this research, because online respondents must use the Internet to register their choices on a survey or in an experiment, an unfathomable bounty of knowledge is literally at their fingertips. Put another way, online surveys and experiments ask respondents to provide answers in an environment where they are accustomed to researching questions and multitasking. As almost every online survey and experiment requires the respondent to use a web browser to access the questionnaire or experimental instrument, there is no guarantee that respondents will not engage in other activities while completing the survey. In particular, with the luxury of time and resources at their disposal, online respondents may choose to research some of the questions and topics that appear on the survey or experiment. If true, conducting surveys and experiments online introduces a new and important testing threat to validity.1

In this paper, I report results from an experiment that examines the difference between online and offline responses to factual knowledge questions. In the experiment, I randomly assign respondents to complete the same survey either online or in the laboratory. I provide no incentive to respondents to supply a correct answer to any question; the experiment only asks respondents to select the best possible answer to every question.
1. It is possible, as was the case for Prior and Lupia (2008), that allowing individuals to supplement their responses with additional information is beneficial. It is up to the individual researcher to decide when and how much the behavioral discrepancy I identify is an important threat to validity.

I find that the online respondents are able to answer significantly more questions correctly because, as inquisitive beings, some, though not all, online respondents appear to have researched some of the answers. My results demonstrate that online surveys and experiments must take into account the fact that some respondents will augment their answers by researching some of the topics and questions they confront, thereby artificially increasing their knowledge. If some respondents are willing to research factual knowledge questions when there is no incentive to do so, we can also expect that some online respondents will supplement their answers to opinion questions about topics such as candidate preference, the economy, and foreign affairs. Likewise, many respondents are also likely to research the nature of the experiment in which they are a participant. This additional knowledge may alter their behavior, spoiling their responses.

The Debate over Online and Offline Samples

Academics and practitioners alike are concerned about the representativeness and accuracy of online samples, with good reason: the online community does not mirror the population. A recent survey by the Pew Internet and American Life Project2 confirms that online users are different. Their survey shows that there is an inverse relationship between age and Internet access, and a positive relationship between Internet access and both income and education. For those interested in politics, these disparities are especially troubling, as all three demographics are predictors of a wide range of political attitudes and activities, including racial attitudes and turnout on Election Day (e.g., Delli Carpini and Keeter 1996; see also Kenski and Stroud 2006).3

The most important factor that attracts researchers to the Internet to gather samples of respondents is the relatively low cost.

2. The 2011 survey's demographics are available here: http://pewinternet.org/Trend-Data/Whos-Online.aspx
3. Online survey respondents' demographics mimic the demographics of the online population, as they are younger, better educated, and higher on the socioeconomic scale when compared to the population as a whole (Alvarez, Sherman, and VanBeselare 2003; Berrens et al. 2003).

Online nonprobability samples in particular, similar to the innovation of random digit dialing before it (e.g., Groves and Khan 1979), have become an attractive method for survey research because the costs associated with them are a fraction of those of other modes. Internet surveys also offer a degree of flexibility in presentation that is not attainable in person or over the phone.4 Indeed, Couper's (2000, p. 465) astute observation that online research would vastly increase the types of individuals and groups that can conduct survey research has become more accurate with the passage of time.

As researchers continue to turn toward the Internet to help answer empirical questions, debates over the quality of online data have gained growing attention. This debate centers on the method of recruiting subjects to participate in online samples, specifically whether the subjects are selected via probability sampling or respondents opt in to the sample (for a recent review, see Yeager et al. 2011). There seems to be a tacit consensus among researchers that probability sampling of online subjects produces accurate and meaningful measurements of public opinion (see, e.g., Couper 2000). Nonprobability online samples, however, remain an intense topic of discussion and research. The concern with these samples is that because individuals choose to opt in to the survey, the sample is unlikely to be representative of the greater online population and even less representative of the target population. Malhotra and Krosnick (2007) note that online nonprobability samples produce results that are significantly different from those of probability-based face-to-face interviews (i.e., the American National Election Study). In a recent comparison of random digit dialing telephone surveys and online nonprobability samples, Yeager et al. (2011) find that, again, nonprobability online samples produce variable results even after post-stratification, which calls into question the accuracy of research that uses these types of samples.

4. There are, of course, additional strengths and weaknesses to conducting research online. For a recent review, see Chang and Krosnick (2009; see also Couper 2000).

Other researchers challenge the notion that nonprobability samples produce inaccurate results. Hill et al. (2007) find that the demographics of the 2006 Cooperative Congressional Election Study, conducted by YouGov/Polimetrix, which employs a post-stratification matching algorithm to reweight responses to correspond with the target population, were very similar to those of the American National Election Study. By way of conclusion, Hill et al. argue that a large-n nonprobability sample will produce more accurate results than a small-n in-person survey. In a similar study, Ansolabehere and Schaffner (2011) analyze and compare the difference in responses between an Internet opt-in nonprobability sample and both a random digit dialing sample and a mail survey sample. They show that on a variety of metrics, including political knowledge, the nonprobability sample is statistically indistinguishable from the other survey modes.

While the debate over the representativeness and accuracy of online nonprobability samples will continue for the foreseeable future, the number of studies using nonprobability samples has proliferated. Time-sharing Experiments for the Social Sciences (TESS), the Cooperative Congressional Election Study (CCES), and now even the American National Election Study use, either in whole or in part, Knowledge Networks or YouGov/Polimetrix to gather survey samples. The amount of research that uses these three data collection efforts is substantial. A search of the American Political Science Review, American Journal of Political Science, and Journal of Politics from 2006 to 2010 by Ansolabehere and Schaffner (2011) found that 33 of the articles published used either Knowledge Networks or YouGov/Polimetrix data. Expanding the search beyond the top general journals in political science would undoubtedly confirm Ansolabehere and Schaffner's finding that usage of the sampling technique, and online data in general, is widespread.

In recent years, researchers have begun to use Amazon's Mechanical Turk (MTurk) to collect survey samples. In what Amazon describes as a marketplace for work, MTurk offers survey researchers the opportunity to hire an ad hoc nonprobability sample of respondents who complete a survey for a price.5 In a recent analysis of MTurk's accuracy and quality, Berinsky, Huber, and Lenz (forthcoming) find that there is already widespread use of MTurk to gather respondents among social scientists (see also Buhrmester, Kwang, and Gosling 2011). With regard to MTurk's reliability in social science research, Berinsky, Huber, and Lenz replicate a number of experimental studies to show that MTurk samples are actually more representative when compared with student samples or other convenience samples. These results suggest that, at least for experimental work, MTurk data are quite reliable. In a related study, Cassese et al. (2011) find that researchers can use social media outlets to gather samples that are as reliable as other nonprobability samples (including MTurk) for experimental work. To date, however, no study examines whether MTurk or social media sites are appropriate and reliable for gathering public opinion data or conducting non-experimental work. Given the low cost, however, it is likely that MTurk and social media websites will continue to be an important outlet for gathering subjects for both experimental and observational work.

Unlike the overwhelming majority of existing research, this article focuses on an equally important but understudied problem with online samples: respondent behavior. The most identifiable behavioral difference is that online respondents are more likely to report socially undesirable behavior and beliefs (e.g., Chang and Krosnick 2009; Duffy et al. 2005; Holbrook, Green, and Krosnick 2003; see also Himmelfarb and Lickteig 1982; Paulhus 1984).

5. The amount of the reward varies. On February 1, 2012, for example, the compensation available ranged from $0.00 to $17.50 for each task.

That is, respondents online are more willing to provide honest answers to sensitive queries because the nonverbal and verbal cues associated with giving answers to a live survey researcher are absent (Drolet and Morris 2000; Holbrook, Green, and Krosnick 2003; Krosnick 1991; for a longer discussion and review, see Chang and Krosnick 2010). In other words, online respondents feel less shame or embarrassment reporting their behavior and beliefs. While interviewers, especially during face-to-face interviews, can establish trust with the subject, thereby minimizing satisficing (e.g., Drolet and Morris 2000; Holbrook, Green, and Krosnick 2003), online surveys' ability to encourage individuals to report socially unacceptable beliefs and behavior is a net positive for many researchers.

Unmonitored computer-assisted survey interviews (CASI), however, may lead to respondents augmenting their responses with information from outside sources. This is a potential threat to validity for both experimental and observational studies that use online samples. Existing research highlights the potential problem: when there is an incentive to do so (e.g., to earn a higher grade or more money), individuals are likely to use the Internet to aid a variety of activities ranging from term papers (Barberio 2004) to answering political knowledge questions (Prior and Lupia 2008). Clearly, when there is an incentive to improve the accuracy and quality of responses, being able to research answers to questions or the nature of an experiment (e.g., the dictator game) poses a troubling threat to validity for online research.

What about research that does not pay per answer but per completed task (e.g., MTurk, Knowledge Networks), and surveys that recruit a sample by paying advertising fees instead of using a monetary incentive (e.g., using Facebook to find subjects)? On one hand, there is no economic incentive for subjects to supplement their answers with outside sources. As such, one could assume that respondents will simply complete the task without much thought since there is no incentive to do better.

In fact, satisficing, the act of doing the minimum to complete a survey, appears to be higher among online respondents (Duffy et al. 2005; Heerwegh and Loosveldt 2008; but see Chang and Krosnick 2010), suggesting this might be true. On the other hand, online respondents are not automatons lacking an ego. Most survey respondents desire to perform well when participating in a research project (Chambers and Wolf 1996; Sharp and Frankel 1983). Further, Holbrook, Green, and Krosnick (2003) note that telephone respondents appear to multitask when completing surveys, implying that online respondents are likely doing the same. Combined, these studies provide strong reasons to expect that respondents may have wandering mice and keyboards when completing surveys and experiments online, regardless of the incentive structure.

Some existing research suggests that online respondents are augmenting their responses. Prior and Lupia (2008) find that their experiment's subjects, when given more time, are able to answer more political knowledge questions correctly when compared with the baseline online group that did not receive more time. Their subjects perform even better when they have more time and an economic incentive to answer questions correctly. Fricker et al. (2005), Ansolabehere and Schaffner (2011), and Strabac and Aalberg (2010) also note that online respondents tend to score higher on political knowledge questions when compared to phone respondents. These researchers, however, suggest that the differences are negligible. Likewise, Duffy et al. (2005) find that online respondents are more knowledgeable about cholesterol when compared with face-to-face respondents, which leads them to wonder whether online respondents are searching for answers. Again, they conclude that online respondents are simply more informed, as demonstrated by the fact that face-to-face respondents who had Internet access were better informed than face-to-face respondents without Internet access, a finding that Ansolabehere and Schaffner (2011) confirm.

Overall, there are hints that online respondents are supplementing their responses with knowledge from the Internet, but no study shows the extent to which this may (or may not) be a problem. I turn now to consider this unexplored behavioral component of online respondents.

Hypothesis, Data, and Research Design

Respondent behavior when answering survey questions and participating in experiments online remains a relatively unexamined line of research. Understanding how online respondents approach the questions that surveys and experiments ask of them is important for researchers and practitioners alike. If, for example, researchers wish to conduct an online survey assessing opinions about the recent uprisings in the Middle East and North Africa, what kind of opinions can they hope to measure? Will respondents provide a simple gut reaction? Or will some respondents, upon reading the questions, choose to supplement their answers by researching the topic? For online surveys, researching answers or facts related to a survey question may simply be too strong a temptation for the more curious among us.

Two factors make it likely that some respondents will research their answers. First, online surveys and experiments ask individuals to provide responses in an environment where they are accustomed to researching answers, products, and news stories. Some survey participants may see a question as an opportunity to learn about something they did not know much about. Second, many online surveys provide respondents with days, and often weeks, to complete their survey,6 a luxury that is not available to surveys and experiments conducted using other research modes. Furthermore, unless the survey is administering a time-sensitive experiment (e.g., an Implicit Association Test, or IAT), most online surveys allow respondents to answer questions at their leisure.

6. There are, of course, ways to limit the amount of time available for respondents to complete tasks. Mechanical Turk, for example, allows the researcher to place a strict time limit for responses. Additionally, much of the survey software available can track the amount of time each response takes, allowing the researcher to discard as many responses as she deems necessary.
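As a minimal sketch of the duration-based screening this footnote describes, the snippet below filters a hypothetical table of responses by time spent on a question; the column names and the thirty-second cutoff are assumptions for illustration, not features of any particular survey platform.

```python
import pandas as pd

# Hypothetical export from survey software that records time spent per question;
# the column names here are placeholders, not a real platform's schema.
responses = pd.DataFrame({
    "respondent_id": [101, 102, 103, 104],
    "question": ["golf_origin"] * 4,
    "seconds_on_question": [8, 64, 12, 31],
    "answer": ["Scotland", "Scotland", "England", "Scotland"],
})

# Keep only answers given within an assumed 30-second window, mirroring the
# timers some survey firms impose, and set aside the slower responses.
cutoff_seconds = 30
kept = responses[responses["seconds_on_question"] <= cutoff_seconds]
discarded = responses[responses["seconds_on_question"] > cutoff_seconds]

print(f"Kept {len(kept)} of {len(responses)} responses; discarded {len(discarded)}.")
```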


By contrast, surveys and experiments that employ interviewers and proctors press respondents for answers in a timely manner. As mentioned above, Prior and Lupia (2008) show that additional time often leads to more informed answers. For these reasons, the first hypothesis I test is:

H1: On average, online respondents will score higher on factual knowledge questions when compared with offline respondents.

To test this hypothesis, I conducted an experiment in which respondents were randomly assigned to an online or an offline group. All subjects were recruited from lower-division courses in political science and criminal justice at a large public university. Subjects received extra credit for their participation in the experiment. To ensure random assignment, I gathered a list of interested subjects from all participating classes. I then numbered the potential participants sequentially, beginning at 1. Odd-numbered subjects were placed into the treatment group (n=196) and even-numbered subjects became the control group (n=171). The online (treatment) group received an e-mail invitation asking them to complete the survey online within a two-week period (September 6-21, 2011). The offline (control) group received an e-mail providing instructions on when and where they could complete an in-class survey during the same two-week span.7 Instead of registering responses on the computer, the control group was asked to complete an identical paper-based survey. Graduate assistants were at the survey location to check respondents into the experiment and to monitor the subjects so they were not able to access the Internet via a cellular phone or computer.
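A minimal sketch of the alternating assignment just described, assuming the sign-up list has already been collected; the names and list order are illustrative placeholders rather than the study's actual roster.

```python
# Hypothetical sign-up list gathered from the participating classes.
signups = ["Avery", "Blake", "Casey", "Devon", "Emerson", "Finley"]

treatment, control = [], []
for position, subject in enumerate(signups, start=1):
    # Odd positions go to the online (treatment) group,
    # even positions to the in-lab (control) group.
    if position % 2 == 1:
        treatment.append(subject)
    else:
        control.append(subject)

print("Treatment (online):", treatment)
print("Control (offline):", control)
```

Because assignment alternates down the list rather than drawing random numbers for each subject, balance depends on the list order being unrelated to subject characteristics; Table 1 below checks that balance directly.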

7. Some readers may argue that the ideal experimental setup would be to have both groups arrive at the same location and then be randomly assigned to complete the experiment on a computer or in the classroom. In practice, however, this is less desirable since it increases the likelihood that the control group would be aware that the treatment group was allowed to complete the task on a computer.


The only interaction respondents had with the graduate students was being checked into and out of the experiment and having an assistant monitor them while they completed their survey.8 That is, the graduate assistants did not encourage the subjects to complete their responses within any timeframe, and the assistants did not register the responses for the offline participants.9

In the experiment, I asked respondents to answer a battery of factual knowledge questions that ranged from very easy to very difficult. Appendix A lists each question I asked, the difficulty rating, and the percent of overall correct responses. Respondents did not have any monetary incentive to provide correct answers, and their eligibility to receive extra credit for their participation was not conditional on providing correct answers. The topics I asked about were sports (10 questions), popular culture (10 questions), rules of the road (5 questions), economics (5 questions), geography (5 questions), consumer knowledge (10 questions), and American politics (10 questions). To test whether respondents seem to be researching answers, I compare the average number of correct responses for online respondents to the average number of correct responses for offline respondents across all of the subjects on the survey. I also compare the average total number of correct responses for all subjects across both groups. As a check on robustness, and to ensure that the causal effect is in fact online respondents researching answers, I calculate the percent of correct responses to difficult questions for both the treatment and control groups. I define a difficult question as having a rating of 4 or 5 in Appendix A. As an additional test, I included three exceedingly difficult questions on the survey (noted with an * in Appendix A).

8. It is worth noting that online respondents also had to check in by registering their name and digitally signing the consent agreement. That is, both groups required respondents to register their identity before completing the survey. In accordance with human protections procedures, both sets of responses were divorced from any identifying information after the survey was completed.
9. Furthermore, due to the flexible scheduling of the experiment, most subjects in the control group completed the survey with one or two other survey takers in the room at the same time.


These questions are so obscure that, for respondents in their late teens and early twenties, I expect an overwhelming share of the respondents not to know the answer. This makes it likely that most respondents who gave a correct answer to these questions provided a lucky guess, had intimate knowledge of the topic, or researched the correct response. If online respondents were significantly more likely to register a correct response on these questions, it stands to reason that most of these responses stem from respondents augmenting their knowledge of the subject.

H2: As question difficulty increases, online respondents will score better than offline respondents.

Results

My results unfold in three steps. First, I provide some descriptive statistics of my sample and examine whether I achieved random assignment for my two groups. Second, I compare the average level of knowledge for each category and the overall percentage of correct responses for both the treatment and control groups. Finally, I test whether the degree of question difficulty mattered. That is, when the difficulty of the questions increased, sometimes to exceptionally difficult levels, were online respondents more likely to search for the answer?

For an experiment to establish a causal effect, the treatment and control groups must have covariate balance. Table 1 below presents the comparisons between the treatment and control groups for a variety of demographic variables. The comparisons show that random assignment produced two groups that are very well balanced, with a few very minor discrepancies. In particular, the treatment group is slightly less likely to self-identify as Democratic in favor of Independent, and the treatment group is slightly older.


These differences, however, are almost negligible, indicating that I achieved excellent covariate balance between the two groups. Thus, the two groups are comparable despite their different response rates.10

Table 1: Covariate Comparison Between Treatment and Control Groups

                         Control Group    Treatment Group
Democrat                 21.6%            18.9%
Republican               45.6%            44.4%
Independent              24.6%            28.1%
White                    86%              90.3%
African-American         3.5%             3.6%
Female                   50.9%            50%
Age (Mean)               18.7 years       20 years
Math SAT (Mean)          568              562
Reading SAT (Mean)       570              567
Writing SAT (Mean)       541              554
Number of Observations   171              196

I turn now to analyze the comparisons between the treatment and control groups with regard to their knowledge of the questions I asked. To accomplish this, I calculate the average percent of correct responses for each category of questions for both the treatment and control groups. I also present the overall percent of correct answers and test for significance using a two-group difference in means test. The results, which appear in Figure 1 below, provide strong evidence that supports my first hypothesis. In fact, respondents in the treatment group (those who completed the survey online) provided significantly more correct answers in every category, with the exception of political knowledge, when compared with the control group (those who completed the survey in the laboratory).
10. A total of 450 undergraduates signed up to participate in the experiment, with 225 respondents assigned to each group. For the treatment group, the cooperation rate was 87.1%. For the control group, the cooperation rate was 76%. While the different cooperation rates indicate that non-response was worse in the control group, a simple logit (available from the author) indicates that common covariates between the groups (party identification, ideology, SAT scores, gender, year in college, parents' income, and age) are not significant predictors of group assignment. This logit regression suggests that the two groups are equivalent.
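A minimal sketch of the kind of balance-check logit described in this footnote, using statsmodels on simulated placeholder data; the covariates named here stand in for the study's variables, and the data are not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 367  # completed responses across both groups

# Simulated placeholder covariates standing in for those named in the footnote.
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),   # 1 = online (treatment), 0 = offline (control)
    "math_sat": rng.normal(565, 60, n),
    "female": rng.integers(0, 2, n),
    "age": rng.normal(19.4, 1.5, n),
    "democrat": rng.integers(0, 2, n),
})

# If assignment is effectively random, none of these covariates should be a
# significant predictor of which group a respondent ended up in.
balance_logit = smf.logit("treated ~ math_sat + female + age + democrat", data=df).fit(disp=False)
print(balance_logit.summary())
```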


The treatment group scored 3.3 percentage points higher than the control group on political knowledge, but the difference is not significant, perhaps because the sample consisted of students enrolled in lower-division political science and criminal justice courses.11 For the remaining categories (sports, popular culture, rules of the road, economics, geography, and consumer products), the treatment group was significantly more likely to provide a correct response. This difference ranges from 5.4 percentage points higher (consumer products) to 9.9 percentage points higher (sports). Overall, the treatment group's score was 7.2 percentage points higher than the control group's.12 Given that the treatment and control groups appear to be of equal intelligence, as indicated by their SAT scores in Table 1, I can be confident that the treatment is responsible for the disparity between the treatment and control groups. In other words, the treatment group appeared to be researching the correct answers to some of the questions I asked, artificially inflating their knowledge scores.

11. While I do not have a strong theory for why politics is the only category that is not significantly different, it is possible that respondents thought the study's purpose was to measure political knowledge because the majority of subjects were enrolled in political science courses. Perhaps, then, fewer online respondents looked up answers because they did not want to spoil the study. Another possibility is that, again, because many were taking a political science course, respondents felt better prepared to answer questions about politics due to the excellent education that my department provides. The experiment, however, occurred during the beginning of the semester to minimize any potential history threat.
12. To calculate the significance of the difference in responses between the two groups, I use a two-tailed, two-group difference in means test.
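Footnote 12 names a two-tailed, two-group difference in means test. As a rough, generic illustration only, the sketch below implements a textbook two-sample z-test for a difference in proportions; the counts plugged in are hypothetical placeholders, and this is not the author's actual code or exact procedure, which operates on respondents' scores.

```python
from math import sqrt, erf

def two_proportion_ztest(successes_a, n_a, successes_b, n_b):
    """Generic pooled two-sample z-test for a difference in proportions."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Upper-tail probability of |z| under the standard normal, via the error function.
    one_tailed = 0.5 * (1 - erf(abs(z) / sqrt(2)))
    return z, one_tailed, 2 * one_tailed

# Hypothetical counts: 80 of 171 offline versus 120 of 196 online answers correct.
z, p_one, p_two = two_proportion_ztest(80, 171, 120, 196)
print(f"z = {z:.2f}, one-tailed p = {p_one:.4f}, two-tailed p = {p_two:.4f}")
```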


Figure 1: Comparison of Offline and Online Samples Regarding Factual Knowledge

[Bar chart comparing the percent of correct responses for the offline and online samples across each question category (political knowledge, sports, popular culture, rules of the road, economics, geography, and consumer products) and overall; the key comparisons are discussed in the text.]

My second hypothesis states that individuals in the treatment group will score significantly higher on difficult questions if respondents are researching answers. To test this hypothesis on a macro level, I calculated the percent of correct responses for both the online (treatment) and offline (control) groups for questions with a rating of 4 or 5 (listed in Appendix A) across all categories. I constructed the difficulty ratings based on a sample of ten test subjects who answered the questions before the experiment launched. In all, I rated 22 out of 55 questions as a 4 or 5. I present the percentage of correct answers to these questions in Table 2 below. On average, the control group answered 21.1 percent of these questions (about 4.6 questions) correctly. By contrast, the average respondent in the treatment group answered 31.4 percent of the difficult questions (about 6.9 questions) correctly.


The difference of 10.3 percentage points is significant beyond the 99.9 percent confidence interval with a two-group difference in means test. These results indicate that, when question difficulty increases, the online group is significantly more likely to register a correct answer.

Table 2: Percentage of Correct Responses to Factual Knowledge Questions

                                          Control Group   Treatment Group   Difference
All Difficult Questions (4 or 5 Rating)   21.1%           31.4%             10.3%***
Katharine Hepburn                         8.2%            16.3%             8.1%*
Piccadilly Line                           5.3%            8.7%              3.4%
Unsafe at Any Speed                       2.9%            13.7%             10.9%**
Note: Control Group n=171, Treatment Group n=196; *p<0.05, **p<0.01, ***p<0.001 with a one-tailed test.13

As an additional test of the second hypothesis, I included three questions on the survey that were exceedingly difficult. For these questions, I expect that online respondents will significantly outperform their offline counterparts, as most individuals will lack the requisite background information to provide a correct answer. The three questions I asked, also identified in Appendix A, are:

1) Which actor or actress won the most Oscars for acting over their career? (The correct answer is Katharine Hepburn.) 12.5 percent of respondents provided the correct answer.
2) Which London Underground line goes to Heathrow International Airport? (The correct answer is Piccadilly.) 7.1 percent of respondents provided the correct answer.
3) In 1965, consumer activist Ralph Nader published Unsafe at Any Speed. This book is a critique of the safety record of which American-made automobile? (The correct answer is Corvair.) 8.7 percent of respondents provided the correct answer.
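The per-respondent summary reported later in this section (how many respondents answered one, two, or all three of these questions correctly) is a simple tally over 0/1 indicators. A minimal sketch of that tally, run on hypothetical indicators rather than the study's actual responses:

```python
from collections import Counter

# Hypothetical per-respondent indicators for the three exceedingly difficult
# questions (1 = correct, 0 = incorrect); the real analysis uses the survey data.
online_answers = [
    (1, 0, 0),
    (0, 0, 0),
    (1, 1, 0),
    (0, 0, 1),
    (1, 1, 1),
]

score_counts = Counter(sum(triple) for triple in online_answers)
total = len(online_answers)
for k in (1, 2, 3):
    at_least_k = sum(count for score, count in score_counts.items() if score >= k)
    print(f"Answered at least {k} of the three correctly: {at_least_k} of {total} "
          f"({at_least_k / total:.1%})")
```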

13. Almost every comparison, including those presented in Figure 1, is significant with a two-tailed test. The question asking about Katharine Hepburn is the only question that does not reach significance with a two-tailed test. Because I have a clear directional prediction, however, a one-tailed test is sufficient.


As the low percentages of correct answers indicate, these questions probed whether respondents had exceptionally deep knowledge of the subjects the questions asked about. Online subjects could, however, easily research the correct answer with a few strokes of the keyboard and clicks of the mouse. While the second question regarding London's Underground probably required a bit more research than the other two questions, a determined individual could still locate the correct answer in short order. As was the case with the difficult questions, a higher rate of correct answers for the online (treatment) group provides support for my hypothesis that online respondents are supplementing their knowledge with outside research. The results for these exceedingly difficult questions, also available in Table 2, suggest that this is indeed the case. For the first question concerning Katharine Hepburn, only 8.2 percent (n=14) of the control group provided a correct answer compared to 16.3 percent (n=32) of the treatment group. This difference is significant beyond the 95 percent confidence interval. With respect to the second question about London's Underground, only 5.3 percent (n=9) of offline respondents answered correctly compared to 8.7 percent (n=17) of the online group. This difference is not significant, however, perhaps because finding the correct answer requires more than a cursory Google search. Finally, for the third question about Ralph Nader, 2.9 percent (n=5) of the control group knew the answer compared to 13.8 percent (n=27) of the treatment group. This difference is significant beyond the 99 percent confidence interval. In all, 59 respondents (30.1%) in the treatment group answered at least one exceedingly difficult question correctly, 15 respondents (7.6%) answered two or more exceedingly difficult questions correctly, and 2 respondents (1%) answered all three exceedingly difficult questions correctly. In the control group, only 26 respondents (15.2%) answered one of these difficult questions correctly, and only one person (0.6%) answered two exceedingly difficult questions correctly; no one in the control group was able to answer all three correctly.


While individuals may not research every answer, online respondents, even when there is no incentive to do so, appear to research the answers to many questions. I turn now to consider the importance of this finding.

Discussion

In this paper, I conducted an experiment to examine whether online respondents performed better on factual knowledge questions when compared with an offline control group. This research constitutes the first direct test of whether online respondents research answers. Previous studies (e.g., Ansolabehere and Schaffner 2011; Duffy et al. 2005; Prior and Lupia 2008; Strabac and Aalberg 2010) have hinted at the possibility that online respondents augment their responses. These scholars, however, have downplayed the observed differences in knowledge between online and offline respondents. In fact, the accepted explanation is that this difference exists because individuals who have Internet access are just more intelligent, an idea supported by studies examining the demographics of online users (e.g., Davis 1999; Kenski and Stroud 2006; Pew Internet and American Life Project 2011). This study, however, holds Internet access constant (all students at the university have access). I also provide no performance incentives, creating a true baseline for measuring the behavioral differences between online and offline respondents. My results show that online respondents do indeed behave differently when compared with offline respondents. I found that respondents who took my survey online were better able to answer factual knowledge questions about a variety of topics ranging from sports to consumer products. These results indicate that my first hypothesis is correct: individuals who have the freedom of time and resources perform better than those who do not.


A closer examination of online respondents' behavior provides further evidence that they are indeed researching answers to the factual knowledge questions I asked. First, online respondents were significantly more likely to answer difficult questions correctly (the questions that had a rating of 4 or 5 in Appendix A). Second, when I examined three exceedingly difficult questions, the online respondents performed much better (two of the three were significantly different). Taken together, the evidence supported my second hypothesis that, when question difficulty increased, online respondents were more likely to supplement their knowledge with outside information. Overall, I find that, similar to the quote at the beginning of the article, individuals are researching answers to questions when there is nothing at stake but their ego.

Perhaps the largest concern about this research is the reliance on a student sample. While studies have shown that student samples are valid for experimental work (e.g., Druckman and Kam 2011), some readers may wonder whether student samples are indicative of how adults would behave under similar conditions. In other words, are students more incentivized to research answers to questions because they are in a learning environment? Two reasons strongly suggest that I should find similar behavior among adults. First, my findings concur with the results of both Strabac and Aalberg (2010) and Fricker et al. (2005), who use adult samples. In particular, Strabac and Aalberg's online sample answered questions correctly 7.4 percentage points more often; for Fricker et al., the difference was 6.2 percentage points; in this study, the difference was 7.1 percentage points. These remarkably similar results strongly suggest that if I replicated my study with adults, my conclusions would hold. To be sure, having an adult sample complete the experiment in the lab would be ideal; incentivizing the subjects to do so, however, is beyond the resources available to most scholars. Second, similar to YouGov/Polimetrix and Knowledge Networks, my sample of students is an opt-in sample.


There are no strong theoretical reasons to expect fundamental behavioral differences between any two opt-in samples. That is, adults who opt in to YouGov/Polimetrix are just as interested in getting the answer right as any college student (see Chambers and Wolf 1996; Sharp and Frankel 1983).

A second important concern about my research design is that I did not place a time limit on how long individuals could consider each of the knowledge questions online. This was a specific design choice, as I wanted to measure what individuals would do under no constraints. Major survey outfits such as YouGov/Polimetrix and Knowledge Networks, however, often require subjects to answer factual questions within a thirty-second timeframe. This fact is not a significant concern for my design. Strabac and Aalberg (2010) did include a thirty-second timer, and, as noted in the previous paragraph, our results were almost identical. Even with a timer, online subjects are significantly more likely to answer more questions correctly.14 Also, it is worth noting that, while many studies use YouGov/Polimetrix and Knowledge Networks, many researchers rely on generating their own data or use other survey research firms that do not include timers.

My results highlight an important behavioral concern for researchers conducting studies online, one that applies equally to observational and experimental research. For observational research, my findings reveal that respondents may be researching answers to questions that range from the trivial (e.g., who is the current Vice President of the United States) to the complicated (e.g., whether imposing economic sanctions on Iran is sufficient to curb any aspirations for a weaponized nuclear program). Perhaps, then, every online survey that asks respondents the standard slate of political knowledge questions is overestimating what people know.

14. This also implies that if researchers wish to prevent online subjects from researching answers to factual questions, they should limit the time available to fewer than thirty seconds, perhaps to as little as ten to fifteen seconds.


What is more disconcerting, however, is that respondents in my survey were more likely to research difficult answers. By extension, it seems likely that survey respondents are researching the answers to important policy questions they have little knowledge about.

With regard to experimental research, many will argue that online respondents augmenting their knowledge is less of a concern. For some types of experiments, namely in psychology, online subjects researching answers is probably less problematic. For some questions, however, especially commonly used sets of questions (e.g., need for cognition and personality questions), enterprising individuals may investigate why the researcher would want to ask such questions. This knowledge could reveal the nature of the research, which, in turn, could lead the respondent to change her behavior. Similarly, specialized experimental conditions (e.g., a face blending experiment) could lead to respondents researching how the technology works. Again, this information could spoil the experiment. For economic experiments, the threat is more obvious and severe: subjects could investigate the goal of the experiment and learn the best strategies to maximize their return. This problem may be exacerbated by the fact that most online respondents are already economically motivated because they have received an incentive to join the subject pool.

Anecdotally, some of this contaminating behavior is already underway and is quite organized. A simple Google search of the phrase "hits worth turking for" reveals that a number of websites already exist to discuss the merits of tasks posted on MTurk. Reddit, one of the top 50 websites in the United States by Alexa ranking, has an entire section dedicated to the discussion of MTurk tasks. In these discussions, it is common to see fellow workers not only discuss the content of the experiment or survey but also warn one another about potential attention checks.


While not every respondent will engage in this behavior, my results suggest that a significant number of unincentivized respondents do. Researchers should keep this threat to validity in mind when designing experiments.

This research is a first step toward examining the behavior of online respondents. A number of extensions are necessary to understand how online survey participants and experiment subjects approach the tasks we ask them to perform. The first extension of this paper is to replicate this study with additional samples of subjects, using adults if possible. Gathering a sample of adults for a laboratory experiment was beyond the budget available for this research, however. A second extension is (1) to add questions that assess the individual's need for cognition and (2) to time subjects' response times for each question, an option that was not available for this experiment. Third, a natural extension of this work is to replicate the design with an observational study that asks respondents to answer open-ended questions about both simple and complicated policy proposals. Open-ended questions would allow the researcher to explore whether and how online respondents are supplementing their answers. Finally, the ability to track where individuals actually navigate online while they are completing a study would be the most direct test of my theory. Tracking respondents when they confront factual knowledge questions, a psychological experiment, or an economics experiment would provide the most precise data on how individuals behave. Acquiring the technology and consent to conduct such a study was beyond the scope of this project. Combined, these extensions will greatly add to our understanding of the environment in which a great deal of academic and professional research occurs.


Works Cited

AAPOR, Opt-in Online Panel Task Force. 2010. AAPOR Report on Online Panels. http://www.aapor.org/AAPOR_Releases_Report_on_Online_Survey_Panels1.htm.
Alvarez, R. Michael, Robert P. Sherman, and Carla VanBeselare. 2003. Subject Acquisition for Web-Based Surveys. Political Analysis 11: 23-43.
Ansolabehere, Stephen, and Brian F. Schaffner. 2011. Does Survey Mode Still Matter? Findings from a 2010 Multi-Mode Comparison. Harvard University and University of Massachusetts, Amherst.
Barberio, Richard P. 2004. The One-Armed Bandit Syndrome: Overuse of the Internet in Student Research Projects. PS: Political Science and Politics 37: 307-311.
Berinsky, Adam J., Gregory A. Huber, and Gabriel S. Lenz. 2012. Using Mechanical Turk as a Subject Recruitment Tool for Experimental Research. Political Analysis, forthcoming.
Berrens, Robert P., Alok K. Bohara, Hank Jenkins-Smith, Carol Silva, and David L. Weimer. 2003. The Advent of Internet Surveys for Political Research: A Comparison of Telephone and Internet Samples. Political Analysis 11: 1-22.
Buhrmester, Michael D., Tracy Kwang, and Samuel D. Gosling. 2011. Amazon's Mechanical Turk: A New Source of Inexpensive, yet High-Quality, Data? Perspectives on Psychological Science 6: 3-5.
Cassese, Erin C., Leonie Huddy, Todd K. Hartman, Lily Mason, and Christopher Weber. 2011. Socially-Mediated Internet Surveys (SMIS): Recruiting Participants for Online Experiments. West Virginia University.
Chambers, Edgar, and Mona Baker Wolf. 1996. Sensory Testing Methods. Philadelphia, PA: ASTM International.
Chang, LinChiat, and Jon A. Krosnick. 2009. National Surveys via RDD Telephone Interviewing Versus the Internet: Comparing Sample Representativeness and Response Quality. Public Opinion Quarterly 73: 641-678.
Chang, LinChiat, and Jon A. Krosnick. 2010. Comparing Oral Interviewing with Self-Administered Computerized Questionnaires: An Experiment. Public Opinion Quarterly 74: 154-167.
Couper, Mick P. 2000. Web Surveys: A Review of Issues and Approaches. Public Opinion Quarterly 64: 464-494.
Couper, Mick P. 2011. The Future of Modes of Data Collection. Public Opinion Quarterly 75: 889-908.
Curtin, Richard, Stanley Presser, and Eleanor Singer. 2005. Changes in Telephone Survey Nonresponse over the Past Quarter Century. Public Opinion Quarterly 69: 87-98.
Davis, Richard. 1999. The Web of Politics: The Internet's Impact on the American Political System. New York: Oxford University Press.
Delli Carpini, Michael X., and Scott Keeter. 1996. What Americans Know About Politics and Why It Matters. New Haven, CT: Yale University Press.
Drolet, Aimee L., and Michael W. Morris. 2000. Rapport in Conflict Resolution: Accounting for How Face-to-Face Contact Fosters Mutual Cooperation in Mixed-Motive Conflicts. Journal of Experimental Social Psychology 36: 26-50.
Druckman, James N., and Cindy D. Kam. 2011. Students as Experimental Participants: A Defense of the Narrow Data Base. In Cambridge Handbook of Experimental Political Science, eds. James N. Druckman, Donald P. Green, James H. Kuklinski, and Arthur Lupia. Cambridge, UK: Cambridge University Press.
Duffy, Bobby, Kate Smith, George Terhanian, and John Bremer. 2005. Comparing Data from Online and Face-to-face Surveys. International Journal of Market Research 47: 615-639.
Fricker, Scott, Mirta Galesic, Roger Tourangeau, and Ting Yan. 2005. An Experimental Comparison of Web and Telephone Surveys. Public Opinion Quarterly 69: 370-392.
Groves, Robert M., and Robert L. Khan. 1979. Surveys by Telephone: A National Comparison with Personal Interviews. New York: Academic Press.
Heerwegh, Dirk, and Geert Loosveldt. 2008. Face-to-face Versus Web Surveying in a High-Internet-Coverage Population: Differences in Response Quality. Public Opinion Quarterly 72: 836-848.
Hill, Seth J., James Lo, Lynn Vavreck, and John Zaller. 2007. The Opt-in Internet Panel: Survey Mode, Sampling Methodology and the Implications for Political Research. University of California, Los Angeles.
Himmelfarb, Samuel, and Carl Lickteig. 1982. Social Desirability and the Randomized Response Technique. Journal of Personality and Social Psychology 43: 710-717.
Holbrook, Allyson L., Jon A. Krosnick, and Alison M. Pfent. 2007. Response Rates in Surveys by the Media and Government Contractor Research Firms. In Advances in Telephone Survey Methodology, eds. James M. Lekowski, Clyde Tucker, J. Michael Brick, Edith D. de Leeuw, Lilli Japec, Paul J. Lavrakas, Michael W. Link, and Roberta L. Sangster. New York: Wiley. 499-528.
Jackson, Nancy Beth. 2003. Opinions to Spare? Click Here. The New York Times. July 3, 2003.
Kenski, Kate, and Natalie Jomini Stroud. 2006. Connections Between Internet Use and Political Efficacy, Knowledge, and Participation. Journal of Broadcasting & Electronic Media 50: 173-192.
Krosnick, Jon A. 1991. Response Strategies for Coping with the Cognitive Demands of Attitude Measures in Surveys. Applied Cognitive Psychology 5: 213-36.
Malhotra, Neil, and Jon A. Krosnick. 2007. The Effect of Survey Mode and Sampling on Inferences about Political Attitudes and Behavior: Comparing the 2000 and 2004 ANES to Internet Surveys with Nonprobability Samples. Political Analysis 15: 286-323.
Paulhus, Delroy L. 1984. Two-Component Models of Socially Desirable Responding. Journal of Personality and Social Psychology 46: 598-609.
Pew Internet & American Life Project. 2011. Demographics of Internet Users. http://pewinternet.org/Trend-Data/Whos-Online.aspx (February 2, 2012).
Prior, Markus, and Arthur Lupia. 2008. Money, Time, and Political Knowledge: Distinguishing Quick Recall and Political Learning Skills. American Journal of Political Science 52: 169-183.
Sharp, Laure M., and Joanne Frankel. 1983. Respondent Burden: A Test of Some Common Assumptions. Public Opinion Quarterly 47: 36-53.
Strabac, Zan, and Toril Aalberg. 2010. Measuring Political Knowledge in Telephone and Web Surveys: A Cross-National Comparison. Social Science Computer Review 28: 175-192.
Yeager, David S., Jon A. Krosnick, LinChiat Chang, Harold S. Javitz, Matthew S. Levendusky, Alberto Simpser, and Rui Wang. 2011. Comparing the Accuracy of RDD Telephone Surveys and Internet Surveys Conducted with Probability and Non-Probability Samples. Public Opinion Quarterly 75: 709-747.


Appendix A: Knowledge Questions for Appalachian State University Survey of 367 Undergraduates

Correct answers are in boldface font. Each question is ranked 1-5 in parentheses denoting question difficulty, where 1=very easy and 5=very difficult. The percent of responses is indicated in parentheses next to each response. A * denotes an exceptionally difficult question.

Sports

What sport do the New York Yankees play? (1)
A. Basketball (0.8%)
B. Baseball (98.9%)
C. Football (0%)
D. Hockey (0.3%)
E. Don't Know (0%)

What league's championship game is the Super Bowl? (1)
A. Major League Baseball (0.3%)
B. National Basketball Association (0%)
C. National Football League (98.6%)
D. National Hockey Association (0.3%)
E. Don't Know (0.8%)

Which professional sports team does Kobe Bryant play on? (2)
A. Los Angeles Dodgers (0.3%)
B. Boston Celtics (0%)
C. Los Angeles Lakers (85.3%)
D. Miami Heat (8.5%)
E. Don't Know (6%)

Which city is the home of the National Football League's Jaguars? (2)
A. Tampa Bay, FL (1.9%)
B. Washington, DC (1.1%)
C. Cincinnati, OH (5.5%)
D. Jacksonville, FL (67.3%)
E. Don't Know (24.3%)

What does the hockey term hat trick mean? (3)
A. When a Goalie Blocks a Shot with his Helmet (4.9%)
B. When a Player Scores a Wrap-Around Goal (5.2%)
C. When a Player Scores Three Goals in a Single Game (55.9%)
D. When a Forward Commits a Boarding Penalty (0%)
E. Don't Know (34.1%)

How often does the FIFA World Cup occur? (3)
A. Every Two Years (17.7%)
B. Every Four Years (55.6%)
C. Every Six Years (1.4%)
D. Every Eight Years (0.3%)
E. Don't Know (25.1%)

In which country was golf invented? (4)
A. United States (1.6%)
B. Wales (7.1%)
C. England (10.6%)
D. Scotland (54.2%)
E. Don't Know (26.4%)

Which of the following major tennis championships has grass courts? (4)
A. US Open (3.3%)
B. French Open (7.6%)
C. Wimbledon (48.9%)
D. Australian Open (5.7%)
E. Don't Know (36.5%)

Over their career, which of the following players has won the most major championships on the Professional Golfers' Association (PGA) Tour? (5)
A. Arnold Palmer (14.7%)
B. Jack Nicklaus (22.1%)
C. Tiger Woods (46.9%)
D. Ben Hogan (0.3%)
E. Don't Know (16.1%)

Which of the following is the third leg of the Triple Crown in US horse racing? (5)
A. Breeders' Cup (2.5%)
B. Kentucky Derby (33.2%)
C. Preakness Stakes (7.9%)
D. Belmont Stakes (12.3%)
E. Don't Know (44.1%)

American Politics

Who is the current president of the United States? (1)
A. George W. Bush (0%)
B. Bill Clinton (0%)
C. Barack Obama (100%)
D. John McCain (0%)
E. Don't Know (0%)

How long is a president's term in office? (1)
A. Two Years (2.5%)
B. Four Years (96.5%)
C. Six Years (0%)
D. Eight Years (1.2%)
E. Don't Know (0%)

Do you happen to know what job or political office is now held by Joe Biden? (2)
A. Secretary of State (3.5%)
B. Secretary of Defense (2.2%)
C. Speaker of the House (1.1%)
D. Vice President (86.7%)
E. Don't Know (6.5%)

How much of a majority is required for the U.S. Senate and House to override a presidential veto? (2)
A. Simple Majority (1.1%)
B. Two-Thirds Majority (84.2%)
C. Three-Fifths Majority (9.3%)
D. Four-Fifths Majority (0.3%)
E. Don't Know (5.2%)

Whose responsibility is it to determine if a law is constitutional or not? (3)
A. The Supreme Court (79.3%)
B. The President (1.9%)
C. The Senate (9.3%)
D. The House of Representatives (3.3%)
E. Don't Know (6.3%)

Do you happen to know which party has the most members in the House of Representatives in Washington, DC? (3)
A. Republican (59.1%)
B. Democratic (30%)
C. Independent (0%)
D. Green (0%)
E. Don't Know (10.9%)

Who is the current Speaker of the House of Representatives? (4)
A. Nancy Pelosi (33%)
B. Eric Cantor (1.1%)
C. Tip O'Neill (1.4%)
D. John Boehner (55%)
E. Don't Know (9.5%)

Who is the current White House Chief of Staff? (4)
A. Andrew Card (3.8%)
B. William Daley (23.4%)
C. Ezra Klein (1.4%)
D. Rahm Emmanuel (15.8%)
E. Don't Know (55.6%)

How many associate justices sit on the Supreme Court? (5)
A. Six (10.6%)
B. Seven (9%)
C. Eight (19.9%)
D. Nine (46.3%)
E. Don't Know (14.2%)

Who was the first woman to run for Vice President of the United States? (5)
A. Sarah Palin (42.8%)
B. Shirley Chisholm (2.7%)
C. Geraldine Ferraro (18.5%)
D. Hillary Clinton (22.9%)
E. Don't Know (13.1%)

Pop Culture

Who plays Captain Jack Sparrow in the Pirates of the Caribbean film series? (1)
A. Johnny Depp (99.5%)
B. Brad Pitt (0.3%)
C. Johnny Damon (0.3%)
D. Bradley Cooper (0%)
E. Don't Know (0%)

Who is the founder of Playboy magazine? (1)
A. Larry Flynt (0%)
B. Hugh Hefner (95.1%)
C. Bob Guccione (0.5%)
D. Howard Hughes (0.5%)
E. Don't Know (3.8%)

Which of the following countries is where reggae music originated? (2)
A. Jamaica (88.6%)
B. Bermuda (0.3%)
C. Bahamas (2.5%)
D. Cuba (1.4%)
E. Don't Know (7.4%)

Who created the Star Wars film franchise? (2)
A. Francis Ford Coppola (0.3%)
B. Steven Spielberg (14.7%)
C. George Lucas (66.8%)
D. Martin Scorsese (0.5%)
E. Don't Know (17.7%)

Which artist had a 1987 hit with the song Bad? (3)
A. George Michael (2.2%)
B. Michael Jackson (58.3%)
C. Prince (7.9%)
D. Lionel Richie (1.1%)
E. Don't Know (30.5%)

The Oprah Winfrey Show was filmed in what United States city for 25 seasons? (3)
A. New York (13.9%)
B. Los Angeles (12.8%)
C. Miami (1.4%)
D. Chicago (52%)
E. Don't Know (19.9%)

Which musical artist sold the most albums during the 2000s (2000-2009)? (4)
A. Eminem (23.7%)
B. Jay-Z (18.3%)
C. Britney Spears (23.7%)
D. Tim McGraw (5.5%)
E. Don't Know (28.9%)

Which instrument did Miles Davis play? (4)
A. Trombone (4.1%)
B. Trumpet (31.3%)
C. Saxophone (25.3%)
D. Bugle (0.3%)
E. Don't Know (39%)

Which actor or actress won the most Oscars for acting over their career? (5)*
A. Tom Hanks (25.3%)
B. Humphrey Bogart (6.5%)
C. Katharine Hepburn (12.5%)
D. Meryl Streep (16.6%)
E. Don't Know (39%)

What was Princess Diana's maiden name? (5)
A. Spencer (19.6%)
B. Ferguson (10.6%)
C. Windsor (19.9%)
D. Churchill (3.8%)
E. Don't Know (46.1%)

Rules of the Road

What is the primary color of a stop sign? (1)
A. Yellow (0%)
B. Orange (0%)
C. Red (100%)
D. Green (0%)
E. Don't Know (0%)

In North Carolina, what is the BAC (Blood Alcohol Content) limit for drivers 21 years or older? (2)
A. 0.06 (6.5%)
B. 0.08 (86.7%)
C. 0.10 (0.5%)
D. 0.12 (0.3%)
E. Don't Know (6%)

What driver's license must one acquire in order to operate a tractor trailer? (3)
A. Motorcycle Learner Permit (MLP) (0%)
B. Farm Equipment Permit (FEP) (18.3%)
C. Commercial Driver's License (CDL) (65.1%)
D. Standard Driver's License (No special endorsement required) (3.5%)
E. Don't Know (13.1%)

In North Carolina, which of the following is the only road sign shaped like a pennant? (4)
A. No Passing Zone (55%)
B. No Left Turn (0%)
C. No Right Turn (1.4%)
D. Railroad Crossing (15.3%)
E. Don't Know (28.3%)

In what state is a driver not legally required to wear a seat belt? (5)
A. Montana (9%)
B. Rhode Island (3%)
C. Utah (1.6%)
D. None of the Above (41.4%)
E. Don't Know (45%)

Economics

What is the official currency of the United States? (1)
A. Pound Sterling (0%)
B. Euro (0%)
C. Yen (0%)
D. Dollar (100%)
E. Don't Know (0%)

What is the federal minimum wage in the United States? (2)
A. $6.50 (2.5%)
B. $6.75 (4.1%)
C. $7.00 (2.2%)
D. $7.25 (88%)
E. Don't Know (3.3%)

Which city is the location of the largest stock exchange in the world? (3)
A. London (3%)
B. New York City (83.4%)
C. Paris (0.3%)
D. Moscow (1.6%)
E. Don't Know (11.7%)

Which classical economist theorized about the invisible hand of free markets? (4)
A. John Stuart Mill (3.3%)
B. Karl Marx (24.8%)
C. Adam Smith (24%)
D. John Maynard Keynes (8.2%)
E. Don't Know (39.8%)

What was the Gross Domestic Product (GDP) for the United States in 2010? (5)
A. 5.7 Trillion (4.4%)
B. 14.7 Trillion (14.7%)
C. 23.6 Trillion (13.1%)
D. 42.5 Trillion (4.6%)
E. Don't Know (63.2%)

Geography

Which of the following countries lies on the northern border of the United States? (1)
A. Greenland (1.1%)
B. Norway (0%)
C. Canada (98.4%)
D. Sweden (0%)
E. Don't Know (0.5%)

Which state, in terms of total landmass, is the second largest in the United States? (2)
A. Texas (64.9%)
B. Montana (1.4%)
C. California (16.1%)
D. Alaska (12.3%)
E. Don't Know (5.5%)

In what city is The Alamo located? (3)
A. Dallas (2.5%)
B. San Antonio (55%)
C. Houston (9%)
D. Santa Fe (15.3%)
E. Don't Know (18.3%)

Which of the following countries ranks second in the world in terms of total land mass? (4)
A. China (29.2%)
B. Russia (21.3%)
C. Canada (22.6%)
D. United States (11.4%)
E. Don't Know (15.5%)

Which London Underground line goes to Heathrow International Airport? (5)*
A. Piccadilly (7.1%)
B. Soho (4.3%)
C. London East (9.8%)
D. Victoria (4.1%)
E. Don't Know (74.7%)

Consumer Knowledge

Which type of fuel is used in the majority of automobiles in the United States? (1)
A. Diesel (1.1%)
B. Leaded Gasoline (1.9%)
C. Ethanol (2.5%)
D. Unleaded Gasoline (92.1%)
E. Don't Know (2.5%)

Which of the following forms of consumer activism is the act of voluntarily abstaining from using or buying from a specific company? (1)
A. Strike (2.7%)
B. Boycott (91.3%)
C. Protest (3.3%)
D. Recall (0.8%)
E. Don't Know (1.9%)

Which unit of measurement is used to determine the interior dimensions of a house? (2)
A. Cubic Feet (11.7%)
B. Cubic Yards (1.4%)
C. Square Feet (83.7%)
D. Square Yards (0.5%)
E. Don't Know (2.7%)

Which unit of measurement do electric companies use to determine individual usage? (2)
A. Amperage (9.8%)
B. Volt Hours (14.7%)
C. Nanometers (1.1%)
D. Kilowatt Hours (54%)
E. Don't Know (20.4%)

When shopping for a new car, the sales sticker usually provides the consumer with the MSRP. What does MSRP stand for? (3)
A. Manufacturer's Suggested Retail Price (65.1%)
B. Manufacturer's Standard Retail Price (18.5%)
C. Merchant's Sales Retail Price (2.2%)
D. Motor Standards Relative Prospectus (1.1%)
E. Don't Know (13.1%)

How many calories are in a McDonald's Big Mac? (3)
A. 340 (0.3%)
B. 440 (1.9%)
C. 540 (37.6%)
D. 640 (36.8%)
E. Don't Know (23.4%)

For which of the following do most credit card companies charge the highest interest rates? (4)
A. Cash Advances (29.2%)
B. Purchases (3.5%)
C. Monthly Billing Fees (12.8%)
D. Service Charges (15.5%)
E. Don't Know (39%)

What is the tuition (not including fees) for a full-time, in-state undergraduate at Appalachian State University for the 2011-2012 school year? (4)
A. $1,376 (5.5%)
B. $1,476 (8.7%)
C. $1,576 (26.7%)
D. $1,676 (17.2%)
E. Don't Know (42%)

Which fruit has the most calories per 100 grams? (5)
A. Raw Avocado (18.3%)
B. Banana (11.4%)
C. Mango (12%)
D. Coconut Meat (16.1%)
E. Don't Know (42.2%)

In 1965, consumer activist Ralph Nader published Unsafe at Any Speed. This book is a critique of the safety record of which American-made automobile? (5)*
A. Nova (6%)
B. Corvette (18.5%)
C. Corvair (8.7%)
D. Camaro (9%)
E. Don't Know (57.8%)
