The Effect of Anecdotes on Science Evidence Evaluation
Yiwen Zhong, University of Michigan, Department of Psychology
Abstract
Informed citizens are expected to use evidence, frequently presented in the news media, to inform decisions about health, behavior, and public policy. News media stories often include anecdotal ledes to frame and add color to discussions of science. This series of three experiments examined how anecdotes in news stories about science influence readers’ evaluations. The news stories described the impact of one of two educational interventions (tidy classrooms and exercise). Study 1 tested the impact of a single brief anecdote and found no significant effect; participants rated news reports as equally persuasive with and without anecdotes. Study 2 found that when two anecdotes were included in a news story, participants stated that they were more likely to adopt the educational intervention than when there were no anecdotes. Study 3 found that the source of the anecdote differentially affected its persuasiveness; participants found anecdotes most persuasive when they described a teacher’s classroom experience. Together, these studies suggest that how a news story is framed affects readers’ evaluations of science.
Introduction and Background
Scientific reasoning involves thinking critically and reasoning appropriately for the context at hand (Facione, 1991). Although newly published studies are now easier than ever to access, the sheer volume of research makes it difficult to decide what to believe and what to discount. Research has shown that conflicting results can arise even within the same study design, such as case-control research (Mayes et al., 1988), which places the burden on individual readers to evaluate claims carefully so that they do not fall victim to false information. When conducting or reporting a study, researchers may be biased in various ways, so readers must evaluate each claim against the evidence the researchers provide. A biased sample, a flawed experimental design, or an overly broad conclusion, among other problems, can lead to a distorted research outcome. These flaws are hard to identify, and matters are made worse when researchers include anecdotes while presenting their studies to the popular press in order to persuade lay people to believe the results. Anecdotal information, that is, information conveyed through personal experience such as telling a story, tends to be eye-catching and easy for the mind to process.
Studying the effect of anecdotes is important because anecdotes are commonly used in everyday life; they are vivid representations of what has already happened, which can serve as guidance for future actions. Past studies have shown that when anecdotal information is presented at the beginning of a scientific article instead of descriptive text, people are more likely to rate the article as convincing and persuasive (Rodriguez, Rhodes, Miller, & Shah, 2016). Moreover, participants tend to engage in less methodological analysis when anecdotes are present; that is, they engage less in scientific reasoning. Although some work has focused on the impact of anecdotal information on scientific reasoning (Rodriguez, Rhodes, Miller, & Shah, 2016; Koballa, 1986; Herr, Kardes, & Kim, 1991), few studies have specifically examined this effect in the field of education.
Understanding the effect of anecdotes in education is important because education is how the next generation comes to outgrow the current one. On one hand, teachers’ reasoning abilities may serve as a model that can shape children’s thinking styles. On the other hand, many studies are published each year proposing new teaching methods that are claimed to be effective, and teachers must evaluate the quality of those studies before deciding whether to incorporate them if teaching quality is actually to improve.
Overview
The goal of this study is to examine the effect of anecdotal information on scientific reasoning, particularly in the field of education. The experiments were designed to assess how people reason about an educational study when it is introduced with anecdotal information versus descriptive text alone.
Method
Participants
The participants for this study were students and teachers. Two hundred and forty-five students (135 female, 100 male, 10 not recorded) were recruited from the University of Michigan Introductory Psychology Subject Pool. The students’ average age was 18.77 years (SD = 1.21), ranging from 17 to 26. Six teachers were recruited through recruitment fliers. Student participants were granted half an hour of credit for participating, and teacher participants were each given a 5-dollar Starbucks e-gift card.
Procedure
Participants completed the study either online or in the research lab, and materials were administered using Qualtrics. All participants read two articles, which could be presented in four ways. Participants were randomly assigned to one of four conditions: 1) article A with a story, then article B without; 2) article A without a story, then article B with; 3) article B without a story, then article A with; 4) article B with a story, then article A without. After reading each article, participants first answered a multiple-choice comprehension question. The article then appeared again, and participants rated the strength of evidence, persuasiveness, and the likelihood of incorporating the technique discussed in the article into a real classroom. They also answered open-ended questions about the reasons for their ratings. Finally, they answered background questions, including gender, age, and the highest level of statistics class they had taken in college.
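The paper does not specify how participants were allocated to the four conditions beyond random assignment; the sketch below, using hypothetical names, illustrates one way the counterbalancing scheme described above could be implemented.

```python
# Minimal sketch (hypothetical names) of the four counterbalanced
# presentation conditions: each condition fixes the article order and
# which article carries the anecdotal lede.
CONDITIONS = [
    [("A", True),  ("B", False)],  # 1) A with story, then B without
    [("A", False), ("B", True)],   # 2) A without story, then B with
    [("B", False), ("A", True)],   # 3) B without story, then A with
    [("B", True),  ("A", False)],  # 4) B with story, then A without
]

def assign_condition(participant_id: int):
    """Rotate participants through the four conditions in arrival order."""
    return CONDITIONS[participant_id % len(CONDITIONS)]

# Example: the third participant sees B without an anecdote, then A with one.
print(assign_condition(2))
```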
Materials
The educational articles used in this study served as introductions to new teaching methods. There were two articles on different teaching techniques: one was adapted from research on exercise’s positive influence on language learning (Reynolds, 2017), while the other, on tidy rooms boosting academic performance, was created by the researcher. The articles were designed to resemble a popular online journal, with a headline, the author’s name, a date, a picture, and a one-page article body. Each article started with either a brief descriptive introduction or an anecdotal story related to the gist, followed by the research study. The two articles, as well as the anecdotal and descriptive opening paragraphs, were made roughly the same length to rule out the possibility that longer articles are more persuasive. Experimental flaws in the procedures, measures, and results were deliberately embedded in both research studies.
Anecdotal paragraph: In the first study, the anecdote consisted of one story that favored the new teaching method. In the second study, the anecdotes were modified, either by adding another story that favored the teaching technique or by adding a story that contradicted the study result. In the third study, the anecdotes were framed as recommendations from either scientists or teachers who had already started to implement the new method.
Results
Three studies examined the impact of anecdotes on the evaluation of science evidence reported in media articles. In each study, participants read two articles: one about the impact of having a tidy classroom on student learning, and the other about the impact of exercise on student learning. Participants rated the articles’ persuasiveness and indicated whether they would be likely to implement the articles’ recommendations if they were teachers. Each participant read one study description with an anecdote (or multiple anecdotes) and one without; which story had the anecdote was counterbalanced, as was article order. Study 3 included an additional between-subjects variable: whether the anecdote came from a teacher or a scientist.
Study 1: Study 1 examined the impact of a single anecdote on science evidence evaluation.
For evidence strength and persuasiveness ratings, there was a significant main effect of scenario (both p’s < 0.001), such that the exercise scenario received higher ratings than the tidy room scenario. However, this effect was reversed for the incorporation-likelihood rating (p < 0.001): participants were more likely to adopt the tidy room intervention than the exercise intervention in a hypothetical classroom. There was no effect of anecdote presence, and there were no significant differences in the rates of mentioning the study when explaining incorporation ratings.
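The analyses behind these p-values are not specified in the text; one plausible approach for the counterbalanced two-article design is a mixed-effects model with a per-participant random intercept. The sketch below is illustrative only, using simulated data and assumed variable names rather than the study’s actual data or analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data mirroring the design: each participant rates two articles,
# one per scenario, with the anecdote attached to one of them
# (counterbalanced across participants). All values are placeholders.
rng = np.random.default_rng(0)
rows = []
for subj in range(80):
    anecdote_on_exercise = subj % 2 == 0      # counterbalancing
    subj_effect = rng.normal(0, 0.5)          # per-participant baseline
    for scenario in ("exercise", "tidy_room"):
        has_anecdote = (scenario == "exercise") == anecdote_on_exercise
        rows.append({
            "subject": subj,
            "scenario": scenario,
            "anecdote": int(has_anecdote),
            "rating": rng.normal(4.5, 1.0) + subj_effect,
        })
df = pd.DataFrame(rows)

# Fixed effects of scenario, anecdote presence, and their interaction,
# with a random intercept for each participant.
model = smf.mixedlm("rating ~ scenario * anecdote", df, groups=df["subject"])
print(model.fit().summary())
```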
Study 2: Study 2 examined the impact of two anecdotes on science evidence evaluation.
For evidence strength and persuasiveness ratings, we replicated the main effect of scenario from Study 1, such that the exercise scenario had higher mean ratings than the tidy room scenario (both p’s < 0.05). This preference for the exercise scenario again reversed for the incorporation-likelihood rating, with significantly higher mean ratings for the tidy room scenario (p < 0.001). We additionally observed a main effect of anecdote presence on incorporation-likelihood ratings (p < 0.01), such that participants were more likely to adopt a given intervention if the scenario contained multiple anecdotes rather than no anecdotes. There was an interaction between anecdote presence and scenario in the rates of mentioning the study when explaining incorporation ratings (p < 0.05, Figure 1), such that participants were more likely to mention the study in their explanations when they saw anecdotes, but only for the exercise scenario. There was no effect of anecdote conflict on any rating or explanation outcome.
Study 3: Study 3 compared the presence and absence of anecdotes, as well as whether the anecdote came from a teacher or a scientist.
We replicated the scenario effect of higher ratings for the exercise than for the tidy room scenario, but only for the evidence strength rating (p < 0.01). Additionally, there was a significant three-way interaction between anecdote presence, anecdote source, and scenario for evidence strength ratings (p < 0.01, Figure 2). Participants showed a stronger anecdote effect (higher ratings when the anecdote was present than absent) when the anecdote came from a teacher for the exercise scenario, whereas they showed a stronger anecdote effect when the anecdote came from a scientist for the tidy room scenario. A three-way interaction was also observed for persuasiveness ratings (p < 0.05), showing a pattern similar to the evidence strength data. For incorporation-likelihood ratings, there was a main effect of anecdote presence (p < 0.05), with higher overall ratings when an anecdote was present than absent. There was also a main effect of scenario, with higher ratings for the tidy room than the exercise scenario (p < 0.001), replicating previous findings. Finally, there were significant interactions between anecdote presence and anecdote source (p < 0.05) and between anecdote presence, anecdote source, and scenario (p < 0.05, Figure 3). Overall, the anecdote effect was stronger when the anecdote came from a teacher than from a scientist. However, this pattern varied with scenario: there were strong anecdote effects for both scientist and teacher anecdotes in the tidy room scenario, whereas only the teacher anecdote produced an anecdote effect in the exercise scenario. No effects were observed for rates of mentioning the study in explanations.
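As with Studies 1 and 2, the exact models are not reported; a natural extension of the earlier sketch adds the between-subjects anecdote-source factor as another fixed effect, so that the scenario × presence × source term plays the role of the reported three-way interaction. Again, the data and names below are simulated assumptions, not the study’s own analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Sketch of Study 3's mixed design (simulated data, assumed names):
# scenario and anecdote presence vary within participants; anecdote source
# (teacher vs. scientist) varies between participants.
rng = np.random.default_rng(1)
rows = []
for subj in range(120):
    source = "teacher" if subj % 2 else "scientist"   # between-subjects factor
    anecdote_on_exercise = (subj // 2) % 2 == 0       # counterbalancing
    subj_effect = rng.normal(0, 0.5)
    for scenario in ("exercise", "tidy_room"):
        has_anecdote = (scenario == "exercise") == anecdote_on_exercise
        rows.append({"subject": subj, "scenario": scenario,
                     "anecdote": int(has_anecdote), "source": source,
                     "rating": rng.normal(4.5, 1.0) + subj_effect})
df3 = pd.DataFrame(rows)

# The scenario:anecdote:source coefficient is the analogue of the reported
# three-way interaction between scenario, anecdote presence, and source.
model3 = smf.mixedlm("rating ~ scenario * anecdote * source",
                     df3, groups=df3["subject"])
print(model3.fit().summary())
```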
Discussion
Evidence-based versus experiential decision-making
Although participants placed more trust in the science behind the exercise intervention, they were more likely to adopt the tidy room intervention in a hypothetical classroom. This suggests that evidence strength and persuasiveness were not the primary factors in participants’ decisions to adopt a given intervention. Indeed, explanations cited participants’ own experience and/or the plausibility of the intervention more frequently than the strength of the study as reasons for choosing or not choosing the intervention.
Presence of anecdotes
We observed anecdote effects on the incorporation-likelihood rating only in Studies 2 and 3; participants were more likely to adopt a given intervention when it was accompanied by an anecdote than when it was not. This suggests that a single anecdote from an unfamiliar or untrusted source may not be as effective as multiple anecdotes or a single anecdote from an expert.
Anecdote consistency
Study 2 tested whether multiple consistent anecdotes, compared with inconsistent ones, would lead to higher study ratings and/or a greater likelihood of adopting an intervention. Although participants were more persuaded by the study in the presence than in the absence of anecdotes overall, we found no consistency effects on any rating outcome, suggesting that even conflicting anecdotes remain persuasive.
Anecdote source
In Study 3, participants trusted and were persuaded by the evidence for the exercise intervention more when they viewed an anecdote, but only when that anecdote came from a teacher. In contrast, participants endorsed the tidy room evidence more with an anecdote, but only when that anecdote came from a scientist. These results could possibly reflect plausibility differences between the two scenarios, such that the teacher anecdote boosts trust in the less plausible exercise intervention, whereas the scientist anecdote boosts trust in the more plausible tidy room intervention. The anecdote source also affected participants’ likelihood of incorporating a given intervention, such that a teacher anecdote boosted adoption of the intervention for both the exercise and tidy room scenarios. However, only the tidy room scenario additionally benefited from a scientist anecdote. This suggests that people are generally more likely to rely on anecdotes from experts who have experience in the domain of the research study (e.g., teachers are experts in educational settings) when making decisions. However, in the case of the highly plausible tidy room scenario, any type of supporting anecdote was sufficient to sway decisions about incorporating the intervention. This further suggests that whether people rely on anecdotes in the first place when making decisions may depend on the plausibility of the intervention.
Impact Statement
In the era of fake news, it is important to establish the conditions under which readers are best able to evaluate scientific evidence and the degree to which their evaluations are influenced by superfluous factors such as anecdotes. This series of three studies demonstrates that news stories about science that include multiple personal anecdotes can alter people’s evaluations of the data.