How can we judge health and science information?
As with media, this ranges from the basic aspects of how scientists form and test hypotheses (summarized at the beginning of this section) to the mechanics of how health and science research is funded, reviewed, published, disseminated and covered by journalists. Some educators have defined these skills as “scientific literacy,” but many of them are very similar to those we associate with media and digital literacy. For example, Paul Hurd, who popularized the term scientific literacy, describes a scientifically literate person as someone who, among other skills and habits:
- Distinguishes experts from the uninformed
- Distinguishes theory from dogma, and data from myth and folklore
- Recognizes that almost every facet of one’s life has been influenced in one way or another by science/technology
- Distinguishes science from pseudo-science such as astrology, quackery, the occult and superstition
- Distinguishes evidence from propaganda, fact from fiction, sense from nonsense, and knowledge from opinion
- Knows how to analyze and process information to generate knowledge that extends beyond facts
- Recognizes when one does not have enough data to make a rational decision or form a reliable judgment
- Recognizes that scientific literacy is a process of acquiring, analyzing, synthesizing, coding, evaluating and utilizing achievements in science and technology in human and social contexts
- Recognizes the symbiotic relationships between science and technology and between science, technology and human affairs[1]
This section will describe four key ways of achieving these goals: being an informed reader of science news; identifying scientific consensus on an issue; evaluating the authority of a source (either an organization or an individual) making a claim; and performing specialized searches that only deliver results from authoritative sources.
Being an Informed Reader
The biggest area of overlap between media literacy and scientific literacy is the importance of understanding how scientific work gets communicated and eventually reported as news. Science writer Emily Willingham gives five tips for reading science news:
- Go beyond the headline: Headlines, which are often written by people other than the person who wrote the article, simplify issues, emphasize the most sensational aspects of a story – which may not be the most important – and can be misleading. Read the article to get the whole story.
- Look for the basis of the article: Is it about new research, or connecting existing research to a new story? Read through the story to find out what it’s based on, and watch out for words like “review,” “perspective,” or “commentary” – these usually mean no original research has been done. As well, don’t put too much stock in research announced at conferences, as this hasn’t usually been through the peer-review process.
- Look at what the research is measuring: Is it an “association,” a “risk,” a “link” or a “correlation”? Any of those words suggests that we can’t draw any real conclusions yet, because they only mean there is a relationship between two or more things. Sometimes, of course, correlation (the relationship between two events) does show causation (where one event causes the other), but usually the first findings that make the news haven’t shown that.
- Look at the original source: Does the story have a link to the original research? If so, follow it so you can find out where it was published. If not, do a search for the title of the paper or, if you don’t have that, search for the authors and the subject of the research. If the research was published in a scientific journal, you can do a Google search for the journal’s name plus the words “impact factor” to see if other scientists draw on it (anything above 1 shows that the source is part of the scientific community; if the search doesn’t show an impact factor at all, the source isn’t considered an academic journal). A quick way to set up this kind of search is sketched just after this list.
- Keep commercial considerations in mind: News outlets want readers; researchers need funding; the institutions that fund them often want publicity. Even when a science story is entirely legitimate, and not someone trying to scare you into buying a book or some nutritional supplements, there are lots of reasons why the importance or implications of new research might be exaggerated by anyone in that chain.[2]
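For readers comfortable with a little code, the impact-factor tip above is easy to turn into a repeatable habit. The short Python sketch below is purely illustrative (neither the script nor the example journal comes from Willingham’s article): it simply builds the suggested Google search for a journal’s name plus the words “impact factor” and opens it in your browser.

```python
# Illustrative sketch: build and open a Google search for "<journal name> impact factor".
import urllib.parse
import webbrowser

def impact_factor_search_url(journal_name: str) -> str:
    """Return a Google search URL for the journal's name plus 'impact factor'."""
    query = f'"{journal_name}" impact factor'
    return "https://www.google.com/search?" + urllib.parse.urlencode({"q": query})

if __name__ == "__main__":
    # Example journal name only; use whichever journal the news story cites.
    url = impact_factor_search_url("PLOS ONE")
    print(url)
    webbrowser.open(url)  # opens the search in your default browser
```

Reading and weighing the results is still up to you; the script only saves the typing.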
Identifying Consensus
As noted above, one of the challenges of understanding health and science topics is that both news and social networks tend to be biased towards what is new, such as recent findings or discoveries. This can make it harder to gauge the reliability of health and science information because by definition these new findings – what is sometimes called “science-in-the-making” – don’t necessarily reflect the consensus view of the scientists in the field.[3] This bias towards new findings can contribute to the misunderstanding of science as an individual rather than a collective process and can exaggerate the extent to which scientists disagree on a topic.[4] Groups or individuals may take advantage of this to make it seem as though there is not a consensus in a field where one has been reached, or that a consensus has been reached where one has not been. But as Kolstø puts it, “most science teachers and science educators are rarely convinced at first glance when new results are emanating from the frontier of science. We gladly read about new and exciting findings, but keep all possibilities open – for the time being. The reason is that we first want to know if there is some sort of consensus among the relevant experts concerning these new scientific knowledge claims.”[5]
It’s important to understand not just that the scientific consensus can change, but how it happens, to avoid the misconception that everything scientists believe now will eventually be overturned. Instead, as Caulfield puts it, “as a fact-checker, your job is not to resolve debates based on new evidence, but to accurately summarize the state of research and the consensus of experts in a given area, taking into account majority and significant minority views.”[6]
When looking for a consensus, an encyclopedia is actually a good place to start. At the time this article was written, for example, the Wikipedia articles on global warming, the supposed link between vaccines and autism, and the possible health effects of cellphone radiation all accurately reflect the scientific consensus. (Although Wikipedia articles can be changed by users, automated editors now catch and quickly correct most cases of vandalism or ideologically motivated edits: as a result, the more controversial articles are actually more likely to be accurate.[7] See our handout Wikipedia 101 for more details on how to tell if a specific Wikipedia article is reliable.)
Specialized Sources
Another strategy is to take the opposite approach and confine your search to sources that you already know are reliable and reflect the scientific consensus. This requires a bit more expert knowledge but can save you a lot of time and can prevent you from (consciously or unconsciously) seeking out information that you want to be true or that supports what you already believe. For example, the Health on the Net Foundation offers Honsearch, a dedicated search engine that only searches sites it has certified, while WorldWideScience is an international collaboration that searches scientific sites and databases from around the world. Specific resources for debunking possible misinformation in health and science include Quackwatch, which focuses on health; the McGill Office for Science and Society, which is dedicated to “separating sense from nonsense” in science; and the Snopes sections devoted to health, science and medicine.
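If none of those tools covers the topic you’re researching, you can approximate a “trusted sources only” search yourself. The Python sketch below is our own illustration, not how Honsearch or WorldWideScience actually work: it adds Google’s site: operator for a hand-picked list of domains, and the domains shown are placeholders you would replace with sources you have already judged reliable.

```python
# Illustrative sketch: restrict an ordinary Google search to a short list of trusted sites.
import urllib.parse

# Placeholder list only; substitute domains you have already judged reliable.
TRUSTED_DOMAINS = ["who.int", "cdc.gov", "quackwatch.org"]

def trusted_search_url(query: str) -> str:
    """Build a Google search URL limited to the domains listed above."""
    site_filter = " OR ".join(f"site:{domain}" for domain in TRUSTED_DOMAINS)
    full_query = f"{query} ({site_filter})"
    return "https://www.google.com/search?" + urllib.parse.urlencode({"q": full_query})

print(trusted_search_url("cellphone radiation health effects"))
```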
Evaluating Authority
While it’s not always possible to judge the reliability of a specific scientific or medical claim, it is usually possible to judge the authority of the person making it. As with our judgments of claims, though, our judgments of authority are often influenced by whether or not we agree with the supposed expert: one study found that people rated the authority of experts with the same credentials differently based on whether or not the expert’s opinion matched their own.[8] At the same time, promoters of misinformation rely on supposed experts to bolster the credibility of their claims.[9] So how do we determine if someone really is an authority? How can we tell the difference between someone who may have genuine data challenging the scientific consensus and a crank with no standing in the scientific community?
It’s important to understand that being an “expert” may be limited to having expertise on a very specific subject. The popular distinction between scientists and non-scientists can obscure the fact that most scientists have little specialized knowledge outside of their area: while physicists may know more about biology than someone without any scientific education, for example, they are unlikely to know enough to make or judge claims in that field.
The first step, then, is to make sure that an expert’s credentials are actually in the relevant area. This may require a bit of additional research: for example, Dr. Joseph Mercola, whose website promotes (and sells) supplements both for nutrition and as an alternative to vaccines, has a doctorate in osteopathy, a branch of medicine that focuses on the muscles and skeleton – not nutrition or the immune system.
If the person claiming authority is a scientist, you can also find out whether they have a publication history in the field by searching for their name on Google Scholar. For instance, the global warming denialist blog Principia Scientific ran a story about a paper by Christopher Booker claiming that the scientific consensus was the result not of evidence but of “groupthink”;[10] Google Scholar shows that Booker has published works promoting conspiracy theories about climate change, wind power and the European Union, but no scientific work on these or any other topics.
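To see how little is involved in this kind of check, here is our own illustrative Python sketch (not part of the original text). It builds Google Scholar links for an author’s name using Scholar’s author: operator, so you can compare two people’s publication histories side by side.

```python
# Illustrative sketch: build Google Scholar search links for named authors.
import urllib.parse

def scholar_author_search_url(author_name: str) -> str:
    """Return a Google Scholar search URL using Scholar's author: operator."""
    query = f'author:"{author_name}"'
    return "https://scholar.google.com/scholar?" + urllib.parse.urlencode({"q": query})

# Compare a working scientist's publication record with that of someone who has
# none; both names are taken from the examples in this section.
for name in ("Stephan Lewandowsky", "Christopher Booker"):
    print(name, "->", scholar_author_search_url(name))
```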
As well, an expert’s authority can be compromised if there is reason to think their judgment isn’t objective. Make sure the expert isn’t compromised for ideological or commercial reasons by finding out who is paying for their platform: if the funder has political or financial reasons for wanting you to believe the expert – or if it isn’t clear who is funding them – you should treat the claim with additional skepticism.[11]
You can use similar methods to determine if an organization or publication is an authority. As with other types of information, you first need to “go upstream” to determine where a claim was first published. Once you’ve determined its origin, you can do a search with the methods listed above and in other sections to see if the source is well-regarded and if there is any reason to consider it biased. For example, the American Academy of Pediatrics and the American College of Pediatricians both represent themselves as authoritative bodies on pediatrics, but a few moments’ research shows that the AAP has 66,000 members while the ACP has only 500.
For scientific articles you can also find the journal’s impact factor (see “Being an Informed Reader” above) to see if other scientists draw on it, and do a search for the title of the paper with the words “replicated” or “retracted” to see what other scientists have found about it. For example, a search for “Wakefield autism retracted” will show that The Lancet, the journal that originally published the article linking the MMR vaccine with autism, eventually retracted it, and that another British medical journal, the BMJ, later described the study as an “elaborate fraud.”[12]
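The “replicated”/“retracted” searches can be set up the same way. In the illustrative sketch below, the wrapper function and the example search terms are our own; it simply pairs a paper’s title or topic with each of those words and prints the resulting Google search links.

```python
# Illustrative sketch: pair a paper's title or topic with "retracted" and "replicated".
import urllib.parse

def follow_up_search_urls(search_terms: str) -> dict:
    """Return Google search URLs checking whether a paper was retracted or replicated."""
    base = "https://www.google.com/search?"
    return {
        word: base + urllib.parse.urlencode({"q": f"{search_terms} {word}"})
        for word in ("retracted", "replicated")
    }

# Example from the text: checking the status of the Wakefield MMR/autism paper.
for word, url in follow_up_search_urls("Wakefield autism").items():
    print(word, "->", url)
```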
Putting it All Together
We can use what we’ve learned in this section to examine the claim at the very beginning of it, that readers can be “inoculated” against scientific misinformation by being taught to recognize it. Depending on where we read the claim, we might have to track down the original article: entering the terms “inoculated,” “science” and “misinformation” leads us to the PLOS ONE article cited on that page. A search for “PLOS ONE impact factor” gives us a result of 2.806 – not as high as some sources, but enough to show that its articles are used by other scientists. For simplicity’s sake, we’ll pick the one of the four authors who is least likely to share a name with other scientists, Stephan Lewandowsky (another of the study’s authors is named John Cook, for example, and the odds that there are other John Cooks working in other fields are much higher), and search for his name on Google Scholar. This gives us more than 2,000 results in a variety of journals dedicated either to psychology or to general science, enough to be sure that, in contrast to someone like Christopher Booker, he is a scientist who is working and publishing in his field. We can also do a Google Scholar search for the article to see if others have cited it and, if so, where: this gives us 25 citations in journals ranging from Nature to the Journal of Applied Research in Memory and Cognition, whose impact factors we can also determine if we want to double-check (40.137 and 2.85 respectively). A search finds that no one has yet attempted to replicate the paper’s findings (not surprisingly, since it’s quite recent).
It’s important to note that none of this necessarily tells us that this claim is correct, only that the people who made it are genuine authorities in their field, that it’s gone through the peer-review process to ensure that the work that produced it met the standards of the field, and that other scholars in that field consider it reasonably valuable. As a result, we can consider it provisionally true until other work supports or disproves it; for now, we don’t have any reason to doubt it.
[1] Hurd, P. D. (1998). Scientific literacy: New minds for a changing world. Science Education, 82(3), 407-416. doi:10.1002/(sici)1098-237x(199806)82:33.3.co;2-q
[2] Willingham, E. (2012, April 27). Science, health, medical news freaking you out? Do the Double X Double-Take first. Retrieved March 26, 2018, from http://www.doublexscience.com/2012/04/science-health-medical-news-freaking.html
[3] Kolstø, S. D. (2001). Scientific literacy for citizenship: Tools for dealing with the science dimension of controversial socioscientific issues. Science Education, 85(3), 291-310. doi:10.1002/sce.1011
[4] Caulfield, M. (2017). "How to Think About Research" in Web Literacy for Student Fact-Checkers. <https://webliteracy.pressbooks.com/>
[5] Kolstø, S. D. (2001). Scientific literacy for citizenship: Tools for dealing with the science dimension of controversial socioscientific issues. Science Education, 85(3), 291-310. doi:10.1002/sce.1011
[6] Caulfield, M. (2017). "How to Think About Research" in Web Literacy for Student Fact-Checkers. <https://webliteracy.pressbooks.com/>
[7] Cohen, N. (2009, June 8). The Wars of Words on Wikipedia's Outskirts. The New York Times.
[8] Mooney, C. (2011, May/June). The Science of Why We Don't Believe Science. Mother Jones. Retrieved March 29, 2018, from https://www.motherjones.com/politics/2011/04/denial-science-chris-mooney/
[9] Ellis E.G. (2017, May 31). To Make Your Conspiracy Theory Legit, Just Find an ‘Expert.’ Wired. Retrieved March 26, 2018, from https://www.wired.com/2017/05/conspiracy-theory-experts/
[10] Leuck D. (2018, March 22). Global Warming: The Evolution of a Hoax. Principia Scientific. <https://principia-scientific.org/global-warming-the-evolution-of-a-hoax/>
[11] Massicotte, A. (2015). When to trust health information posted on the Internet. Canadian Pharmacists Journal / Revue Des Pharmaciens Du Canada,148(2), 61-63. doi:10.1177/1715163515569212
[12] No author listed. (2011, January 5). Retracted autism study an ‘elaborate fraud,’ British journal finds. CNN.