When facts are not enough

UMaine Professor Robert Wheeler asked me to present on science communication and risk perception to his Infectious Disease class. Because of Dr. Wheeler’s expertise and other topics covered in the class, I used the case of vaccines and autism to illustrate my points. The following is a summary of the lecture.

Science has evolved to generate consensus about what is true and what is not. A wealth of scientific truths has been generated and transferred to the public in the last half-century. Yet neither public support for research nor scientific literacy has increased significantly over the same period. Despite the best efforts of science writers like myself and the new generation of scientist-communicators, more and better information does not equal knowledge.


A lack of understanding about how science works is partly to blame for people believing things that science says are not true, but that’s not the whole story.

Even when people have the right information, they don’t always make the decision that scientists would favor. In the last decade or so, we have learned from psychologists and communication scientists that people view information through the “frames” of their values and beliefs. They trust messengers and seek out sources of information with whom they already feel aligned: friends, relatives, neighbors, pastors, political parties.

While this has probably always been true, a concurrent trend is magnifying the issue: the unprecedented access that we all have to information, and misinformation.

As I mentioned in the beginning of this post, I am using the example of autism to illustrate how people maintain false beliefs in the face of facts.

Autism is a sensitive and serious topic. The dramatic increase in children with autism spectrum disorders is very real and very scary. The latest prevalence, tracked by the CDC, is 1 in 110 children. While increased awareness and diagnosis partly explain the rise in cases (about 25% of it), they can’t explain all of the increase.

[Graph: 600% increase in autism diagnoses]

So, in the 1990s many people were trying to figure out what was causing this increase.

In 1998, a study was published proposing that autism was linked to the MMR vaccine. Even though most of the authors later retracted the study’s conclusions (the experiment used a small sample and had no control group), a link had been made in the minds of many concerned parents (see Gerber and Offit for the details of this story). Around the same time, an EPA study of mercury exposure discussed thimerosal, a mercury-containing preservative used in vaccines. Mercury is a neurotoxin, and so it was easy for parents to seize on the idea that not only the MMR vaccine but all vaccines posed a risk, because they contained mercury. In 1999, despite little evidence that thimerosal-containing vaccines were in any way associated with autism or other harm, authorities ordered their phase-out in favor of thimerosal-free alternatives.

By the end of 2001, the only childhood vaccines still containing thimerosal were flu vaccines, and few children received them. This precautionary step, coupled with a public already concerned by a proposed but unsubstantiated link between vaccination and autism, understandably provoked concern among parents (Gerber and Offit).

But what happened to autism diagnoses?

They continued to rise.

Dozens of studies conducted since 2001 have examined vaccines and autism, and the science does not support any connection.

“The data support a conclusion of no association between thimerosal-containing vaccines and autism in children,” Dr. Sarah Parker, Journal of Pediatrics (2004).

“There is insufficient evidence to suggest that the MMR vaccine causes autism, either due to the vaccine itself, the presence of thimerosal, or the simultaneous administration of multiple vaccines,” Heather Coates, Medical Reference Services Quarterly (2009).

“20 epidemiologic studies have shown that neither thimerosal nor MMR vaccine causes autism,” Dr. Paul Gerber, Vaccines (2009).

Many of the scientists who authored these studies have made the point that concern about vaccines has diverted attention—and research funding—away from efforts to determine the real cause or causes of autism (e.g., genetic influence, other environmental factors).

So why is there still such fear of vaccines? Why hasn’t attention shifted to more probable causes?

Why do people still believe there is a link? How do we support our false beliefs?

1. Vaccines are given at the very same time that autism is diagnosed, so correlation easily becomes causation. Our brains seem to be wired to notice patterns, to believe that two things that happen at the same time are related. After a mercury link was ruled out, alternative theories emerged, such as the idea that the simultaneous administration of multiple vaccines overwhelms or weakens the immune system and creates an interaction with the nervous system that triggers autism in susceptible children. (Although the number of vaccines administered to children has increased, the total immunologic load has decreased.)

2. We have lost social memory of childhood diseases.

[Graph: declines in childhood diseases following vaccination]

Vaccination is considered one of the major success stories in public health. Vaccines have eliminated diseases such as polio and smallpox, and prevented outbreaks of mumps, measles, hepatitis, and whooping cough. Because these diseases are no longer common, we have forgotten them. The new generation of parents has no memory of these diseases from their own childhood, and so may question the need for vaccines. Side effects or reactions to vaccines appear more common than the diseases the vaccines prevent, increasing the perception that vaccines themselves represent a risk. Fear of vaccines has replaced fear of childhood diseases.

A theoretical, even disproved, risk such as getting autism from a vaccine is elevated above the real risk of being hospitalized or killed by influenza, or of promoting an outbreak of diseases like whooping cough or mumps. Harm resulting from immunization is less acceptable than potential harm from not immunizing. This is a classic example of “omission bias”: harm caused through action (commission) is judged less acceptable than harm that results from inaction (omission). It feels better to do nothing (Amanna and Slifka).

Psychology offers additional explanation of how people obtain and process information. Much of this research is focused on political beliefs, but is relevant to any factual information. Unsubstantiated beliefs are maintained via “motivated reasoning,” which suggests that rather than search rationally for the truth, people actually seek out information that confirms what they already believe.

Even when confronted with facts, people don’t necessarily change their belief.

People interpret facts differently; often this difference is illustrated by political affiliation and ideology (Democrat vs. Republican). If a new fact agrees with your belief, you might interpret the fact in an accepting way that strengthens your belief. If the new fact contradicts your belief, you might interpret it in a defensive way and resist changing your belief. Interpretations give individuals leeway to align facts with undeniable realities and yet continue to justify their beliefs and opinions. In the case of the Iraq War, Republicans and Democrats differed in their interpretations of the fact that the United States did not find weapons of mass destruction. Democrats concluded that the weapons did not exist, supporting their opposition to the war. Republicans interpreted the fact to mean that the weapons had been destroyed or moved or not yet found, thus maintaining the rationale for the invasion (Gaines et al. 2007).

When confronted with the fact that mercury is no longer in vaccines, a worried parent might interpret that vaccines themselves, not mercury, are the problem.

Another study looked at whether people change their beliefs when presented with corrected information. Such corrections can actually strengthen misperceptions. The truth can backfire (Nyhan and Reifler).

However, there does seem to be a “tipping point”: enough contradictory facts, combined with broader public sentiment, can trigger anxiety and lead people to change their beliefs (Redlawsk et al.).

More basic emotions also play a role: fear, anxiety, desire.

A study in the journal Sociological Inquiry (Prasad) looked at the strength and resilience of the belief among many Americans that Saddam Hussein was linked to the terrorist attacks of 9/11. The authors concluded that the belief was the result of an urgent need by many Americans to seek justification for a war already in progress. This study argues that the primary cause of misperception in the 9/11-Saddam Hussein case was not the presence or absence of accurate data, but a respondent’s desire to believe in particular kinds of information.

We get attached to our beliefs. They become part of our identity. And so for the people interviewed in this study, the overwhelming evidence that there was no link between Saddam and the 9/11 attacks had no influence on their belief that such a link existed. It had to be true for people to make sense of the war.

So for parents crushed by an autism diagnosis, “there must be a reason.”

The intense desire for an autism cure often leads parent groups and nonprofit organizations to endorse causes and treatments without sufficient evidence of effectiveness. And, for better or worse, it is easier than ever to find (mis)information that supports your belief.


Black, Steven, and Rino Rappuoli. 2010. A Crisis of Public Confidence in Vaccines. Science Translational Medicine 2:1-6.

Gaines, Brian J., et al. 2007. Same Facts, Different Interpretations: Partisan Motivation and Opinion on Iraq. The Journal of Politics 69:957-974.

Nyhan, Brendan, and Jason Reifler. 2010. When Corrections Fail: The Persistence of Political Misperceptions. Political Behavior 32:303-330.

Prasad, Monica, et al. 2009. “There Must Be a Reason”: Osama, Saddam, and Inferred Justification. Sociological Inquiry 79:142-162.

Redlawsk, David P., Andrew J.W. Civettini, and Karen M. Emmerson. 2010. The Affective Tipping Point: Do Motivated Reasoners Ever ‘Get It’? Political Psychology 31:563-593.

Public Understanding of Science

Not only do science writers need to know something about their subject matter and how to describe it in truthful and interesting ways, but they need to know who needs to hear or read or watch the story. Writing is always a two-way process. When we are beginning as writers we tend to think one-sidedly, only about what is inside our own minds and our own words. But part of our growth as writers is to think more about the people on the other side—our readers, our audience.

Why is audience important? The usual answer is that science knowledge is important to the audience—they need to know and understand the information being communicated.

Matthew Nisbet, a professor of communication at American University, classifies dimensions of science knowledge.

1. Practical or utilitarian: It is often said that science in everyday life is invisible, taken for granted. But science knowledge is used daily in decisions like fixing your car, interpreting the packaging on food, or choosing what to wear for the weather. Making such decisions might require a limited knowledge of basic scientific terms, concepts, and facts.

2. Then there is civic or democratic knowledge, sufficient to make sense of a news report, or interpret competing arguments about a policy decision. The public is often asked to make decisions about new technologies that could have far-reaching effects, both on its own wellbeing and on the rest of the world. To make these decisions, people need knowledge so that they can reason well about issues involving science.

3. Nisbet’s third type of understanding is institutional, about the politics and workings of science: who funds it, how is it regulated, etc. This level of understanding also means a capacity to distinguish science from pseudoscience—to know how science works. Maine’s Governor LePage has said he won’t remove rules that are based on science. But how will we know if a rule is “science-based” or not?

All of these theories about scientific literacy and public understanding are based on the idea of a gap between science and the people who need the knowledge that science provides. Here’s a representation of what that gap might look like (thanks to Rob Helpy-Chalk):


Scientists communicate to each other and share knowledge through presentations and publications. The public, the ultimate target audience or the users of the information, could be policy makers, town officials, citizens. The gap between these two realms is well-accepted and often mentioned in conversations about science communication. But rather than accepting the gap, take a closer look. Is it real? Where did it come from?

Bernadette Bensaude-Vincent (2002) pointed out that the gap between scientists and the public is ancient and originated in the different requirements of theoretical and practical knowledge. In ancient times, however, both kinds of knowledge were valued, and it was not expected that ordinary citizens should become like philosophers or naturalists (the predecessors of today’s scientists). For centuries, only thought and language separated them. Members of the public with an interest in science were encouraged to interact with scientists. Over time, as scientists became more professional and more specialized (think quantum physics), the enlightened public of amateurs, a term that still retained a strong positive connotation in the nineteenth century, was transformed into a “mass of gullible, irrational and ignorant people” in the twentieth century… In a relatively short period of time, public knowledge became irrelevant and scientists held a monopoly on legitimate knowledge.

In industrializing nations such as the U.S., science was idealized as the preferred route to economic expansion and social emancipation. The more citizens knew about science, the more they would support this view. As Boyce Rensberger has pointed out, the work of most science reporters in those days consisted largely of translating scientific jargon and explaining the statements of scientists and medical leaders. In the 1930s and ‘40s, science journalists believed that it was their job to persuade the public to accept science as the [economic] salvation of society.

So what have we learned? Does the American public understand and “accept” science?

The National Science Foundation surveys public attitudes and understanding of science every two years, and for several decades Americans have been asked the same series of true-false questions. The number of correct answers to these questions has remained flat—the average American adult does not “know” any more “science” today than he or she did twenty years ago.


Only 51% of Americans knew that electrons are smaller than atoms. One-quarter of Americans don’t know that the Earth revolves around the sun. Just 47% accept that human beings developed from earlier species of animals. And four out of five Americans do not understand the concept of a scientific study (Miller 2004).

But Americans are not necessarily smarter about other topics, and even scientists get many of these questions wrong (Stocklmayer and Bryant 2011). As many have pointed out, including Cornelia Dean and Jon Miller, most people leave science behind when they graduate from high school, and the science we confront as citizens is not the collection of facts in textbooks but more recent science, unfolding every day.

So where do people get their information? How is the knowledge gap being so unsuccessfully filled?

According to the Pew Research Center for the People and the Press, the Internet is slowly closing in on television as Americans’ main source of news. Television remains the most widely used source for national and international news, but the percentage saying they regularly watch local TV news has dipped below 50% for the first time (48%).


Another Pew study found that the days of loyalty to a particular news organization on a particular piece of technology in a particular form are gone. The overwhelming majority of Americans (92%) use multiple platforms to get news on a typical day, including national TV, local TV, the internet, local newspapers, radio, and national newspapers. Some 46% of Americans say they get news from four to six media platforms on a typical day. Just 7% get their news from a single media platform on a typical day, mostly older, well educated, upper middle class whites (Purcell et al. 2010).

Yet more evidence has emerged that newspapers (whether accessed in print or digitally) are the primary source people turn to for news about government and civic affairs. Nearly three quarters (72%) of adults are quite attached to following local news and information, and local newspapers are by far the source they rely on for much of the local information they need (Miller et al. 2012).

Online and digital news consumption, meanwhile, continues to increase, with many more people now getting news on cell phones, tablets or other mobile platforms. And perhaps the most dramatic change in the news environment has been the rise of social networking sites. The percentage of Americans saying they saw news or news headlines on a social networking site yesterday has doubled – from 9% to 19% – since 2010. Among adults younger than age 30, as many saw news on a social networking site the previous day (33%) as saw any television news (34%), with just 13% having read a newspaper either in print or digital form (Pew Research Center 2012).

The social media trends may mean that the 44% of adults who don’t follow the news regularly may be getting information via social media and other online sources.

What about science news specifically? Sources for science news parallel the general news findings from the Pew studies, with the Internet surpassing television as the dominant source for science and technology news. When it comes to specific scientific issues, more people turn to the Internet.


The most popular online news subjects are the weather (followed by 81% of internet news users), national events (73%), health and medicine (66%), business and the economy (64%), international events (62%), and… science and technology (60%).

And people say they want more coverage of science. Asked what subjects they would like to see covered more, 44% said scientific news and discoveries (Horrigan 2006).

A study of the New York Times most-emailed articles in 2009 found that readers preferred e-mailing articles with a positive theme, including long articles on intellectually challenging subjects. They shared stories that inspired awe, including science stories (Tierney 2010).

So, we know that people want science-based information, that they actively seek it, and that they aren’t necessarily deterred by length or complexity.

How skillfully or how often Americans engage in the search for scientific information, whether on the Internet or elsewhere, remains unknown. In a January 4, 2013 commentary in Science, Dominique Brossard and Dietram Scheufele note that among the U.S. public, time spent on the web has been linked to more positive attitudes toward science. Online science sources may be helping to narrow knowledge gaps caused partly by science coverage in traditional media that tends to be tailored to highly educated audiences. Yet one of the challenges of the current situation is the sheer volume of information available on the Internet. The social environment of the web influences the context for science stories: just the tone of the comments following a balanced science story can significantly alter how audiences think about the subject matter.


Allum, N., P. Sturgis, D. Tabourazi, and I. Brunton-Smith. 2008. Science knowledge and attitudes across cultures: a meta-analysis. Public Understanding of Science 17:35.

Bensaude-Vincent, B. 2002. A genealogy of the increasing gap between science and the public. Public Understanding of Science 10:99–113.

Horrigan, J.B. 2006. The Internet as a resource for news and information about science. Pew Internet and American Life Project.

Inglehart, R. 1990. Culture Shift in Advanced Societies. Princeton: Princeton University Press.

Miller, C., K. Purcell, and T. Rosenstiel. 2012. 72% of Americans follow local news closely. Pew Research Center.

Miller, J. 2004. Public understanding of, and attitudes toward, scientific research: what we know and what we need to know. Public Understanding of Science 13:273-294. Jon D. Miller has been studying public interactions with science for more than 20 years. A recent summary of his work can be found in Science and the Media, a report from the American Academy of Arts and Sciences.

Nisbet, M. 2005. The multiple meanings of public understanding. Committee for Skeptical Inquiry.

Pew Research Center for People and the Press. 2012. Trends in News Consumption: 1991-2012.

Purcell, K., L. Rainie, A. Mitchell, T. Rosenstiel, and K. Olmstead. 2010. Understanding the participatory news consumer. Pew Research Center.

Stocklmayer, S.M., and C. Bryant. 2011. Science and the public—what should people know? International Journal of Science Education, Part B: Communication and Public Engagement 2:81-101.

Science Stereotypes

“Scientific discoveries are made by people; they don’t just happen,” wrote Ruth Levy Guyer in our textbook. Who are these people?

The scientist is often portrayed as an isolated man in a laboratory driven by insanity, greed, or selfishness. He’s the awkward nerd spewing physics jargon in the face of a pretty girl. He’s the scatterbrained teacher who accidentally creates a monster or invents a miracle. She’s the hysterical woman whose message will not be heard. How can writers make scientist characters more accurate, diverse, interesting, and effective? Why should writers care about the truthfulness of their real or imagined scientists? One of the ways to improve the coverage of science in the news media is to focus stories on how science works. And one of the best ways to illustrate how science works is to show scientists doing science. This approach also appeals to the human desire for narrative structure, for a protagonist, for action in stories.

The Harris Poll consistently finds scientists near the top of the “most prestigious occupations,” after firefighters and above doctors, nurses, teachers, and military officers. The percentage of Americans who say scientists are ‘odd and peculiar’ has dropped, although one-quarter still agree (Losh 2010).

So why do scientists have such a bad image?

According to Chris Mooney and Sheril Kirshenbaum in their book Unscientific America, there’s something about scientists that triggers a particular kind of stereotyping, one that reflects our society’s uneasiness with the power they can sometimes wield.

As Steven Shapin recently noted, “the modern American scientist is held in some esteem, valued as a useful sort of person, but there is little understanding of what it might be to engage in scientific inquiry for its own sake and little evident approval of such a thing.”

Roslynn Haynes, a professor of English in Australia, identified seven primary stereotypes of scientists. According to Haynes, “the master narrative of the scientist is of an evil maniac and a dangerous man.” These stereotypes provide a useful framework for thinking about humanizing science writing.

The evil alchemist. Alchemy began with metalworkers in Egypt. When it reached medieval Europe through Arabic writings, it became associated with heresy and the black arts: the sinister magician, the devil’s worker, illegal, proud, arrogant, secretive, power-hungry. Dr. Faustus and Victor Frankenstein continue to provide metaphors for modern, cutting-edge research. The alchemist appears in plots that depend on the supernatural and the paranormal, stories in which the credulous believer is always right and the scientist-skeptic is always wrong.

The noble scientist. The first literary work to depict scientists in a positive light was Sir Francis Bacon’s utopian vision, New Atlantis (1627), which depicted the scientist as an altruistic idealist. Star Trek’s rational Mr. Spock brings the order that often saves the Enterprise. Dennis Quaid’s character in The Day After Tomorrow advocates on behalf of all the ignorant people and risks his own life to save those caught in an unprecedented, climate change-driven storm. Bad Science author Ben Goldacre is critical of such portraits of scientists: “The media work around their inability to deliver scientific evidence by using authority figures, the very antithesis of what science is about, as if they were priests, or politicians, or parent figures. ‘Scientists today said…scientists revealed…scientists warned.’ And if they want balance, you’ll get two scientists disagreeing, although with no explanation of why (scientists are ‘divided’). One scientist will ‘reveal’ something, and then another will ‘challenge’ it. A bit like Jedi knights. The danger of authority figure coverage, in the absence of real evidence, is that it leaves the field wide open for questionable authority figures to waltz in.”

The foolish scientist (a.k.a. the absent-minded professor). Satires depict scientists as foolish, cultish, comic. The laughable, lovable absent-minded professor clumsily creates flubber and wins hearts with his eccentricity. “Scientists are unusually insightful and intuitive people who have a strong need for organization and application of concepts. They might have difficulty expressing their ideas, because they don’t think linearly, and their life of the mind could lead others to regard them as aloof,” wrote Steve Bunk. Gary Larson loved the foolish scientist. I like to think his mocking comes from a place of affection and respect.

The inhuman researcher. As science and society evolved, so did science stereotypes. Dr. Frankenstein fits here, too, as do the atomic scientists working on the bomb during World War II. The Cold War gave added weight to this stereotype, with scientists’ documented declarations of unconcern about the human cost of their inventions. What does it mean when we apply this metaphor to others, as in the “Frankenstein economy” of the 2008 financial meltdown on Wall Street?

The scientist as adventurer (e.g., Indiana Jones), as brave, optimistic explorer, traveler of space and time. Jon Palfreman (Nieman Reports 2002) sees television science documentaries as being drawn from a small handful of approved genres, one of which is “archaeology and legends: expeditions, lost treasures, mummies, dinosaur bones, mammoths, the use of forensic methods to uncover the past.” The other genres he identified are “forces of nature,” “modern history,” and “boys and their toys.”

The mad, bad, dangerous scientist. As science increased in power, so did the stereotypes, evolving from the alchemist tradition to the cataclysmic. In Unscientific America, Chris Mooney and Sheril Kirshenbaum wrote, “The uncaring scientist, unconcerned about consequences, pursuing knowledge at all costs—this is the ugliest scientist stereotype, and also the most deeply rooted. It hails from a long literary tradition, reaching back before Frankenstein to Greek stories that depict the search for knowledge as forbidden and dangerous, and leading to disastrous consequences. In this narrative, knowledge leads the scientist to play God, interfere with nature, and attempt to thwart fate by determining who lives and who dies.” The mad scientist can be easily exposed as a wannabe, a fraud, a rogue.

The helpless scientist. The seventh and last stereotype identified by Haynes is the scientist who becomes a victim of his or her own discovery. This is science out of control.

How pervasive are these stereotypes?

In The Big Bang Theory, scientists are unattractive, socially inept, indifferent, uncool, but smart; distant, long-winded, incomprehensible. The lead scientist in Bones, while female and attractive, adheres to the stereotypes of rationality (to a fault), atheism, and a cold lack of emotion. In fact, the cracks that emerge in this façade are a major plot line running through the series.

Movies and television portray scientists who are absorbed in the details of their work, ‘wedded to the job.’ A recent analysis of scientist portrayals on TV (Dudo 2010) found that of 2,868 characters, one percent were portrayed as scientists, mostly white males. Scientists are more likely to be characterized as good, although science as an activity is portrayed as dangerous and violent.

As David Kirby described in his 2011 book Lab Coats in Hollywood, filmmakers are not unaware of these stereotypes. Science consultants are frequently brought in to comment on scientific matters involving the script, the actors, the sets, the props, and any other relevant factor during production. Concern about science in the movies has led several scientific advocacy organizations to develop programs that facilitate more scientific involvement in the production of television programs and films, including the National Academy of Sciences’ Science & Entertainment Exchange, the Creative Science Studio, and the Sloan Foundation’s Film Development program.

Why should any of this matter? As Kirby wrote, “Popular films impact scientific culture by effecting public controversies, enhancing funding opportunities, promoting research agendas, and stimulating the public into political action… Moreover, entertainment texts can influence scientific thought by foregrounding specific scientific ideas and providing narrative reasons to accept them as representing reality.”

As science writers, we have an obligation to write truthfully about science, and to portray scientists not as stereotypes, but as real people, just like you and me.


Bunk, Steve. 2003. The Natural History of Science Personalities. Science Writers, Spring 2003.

Dudo, Anthony. 2010. Science on television in the 21st century: recent trends in portrayals and their contributions to public attitudes toward science. Communication Research.

Goldacre, Ben. Bad Science. www.badscience.net

Haynes, Roslynn. 2003. From alchemy to artificial intelligence: stereotypes of the scientist in Western literature. Public Understanding of Science 12:243-253.

Kirby, David A. 2011. Lab Coats in Hollywood. Cambridge, MA: MIT Press.

Losh, Susan Carol. 2010. Stereotypes about scientists over time among US adults: 1983 and 2001. Public Understanding of Science 19:372-382.

Mooney, Chris, and Sheril Kirshenbaum. 2009. Unscientific America (see Chapter 7).

Palfreman, Jon. 2002. Bringing science to a television audience. Nieman Reports.