Political Disinformation

While it’s important to be skeptical of political news, especially during an election, it’s also important to be able to recognize and dismiss outright disinformation: the deliberate spreading of false or misleading information. The content of political disinformation spans a wide spectrum, from stories that might be credible (such as an endorsement of a politician from a surprising source) to those that are utterly unbelievable (such as the accusation that a candidate for national office is involved in a child-exploitation ring housed in the basement of a pizza parlour). Those spreading disinformation can include governments, political activists and even for-profit publishers (some of whom run multiple disinformation operations that cater to different parts of the political spectrum).[1]

Though there’s debate over how many voters are actually reached by political disinformation,[2] the ability of social networks and search engines to deliver targeted advertising means that it’s far more likely to reach the people most likely to be swayed by it: “when disinformation operators leverage this system for precision propaganda, the harm to the public interest, the political culture, and the integrity of democracy is substantial and distinct from any other type of advertiser.”[3] An analysis of the threats of online misinformation to democracy in Canada suggests that disinformation agents “could use social media to spread lies and propaganda to a mass audience at a low cost [or] masquerade as legitimate information providers, blurring the line between what is real and what is disinformation.”[4]

There are four main ways in which disinformation can affect the outcome of an election:

  1. Reaching an audience that’s vulnerable to the message: Because of the effect our beliefs have on our thinking, targeted advertising works even better for political messages than for traditional commercial ones. Political parties and disinformation agents alike can draw on the data profiles provided by social networks, search engines and ad brokers to deliver their messages both to those who are known to hold particular beliefs and to “lookalike audiences” that resemble them closely enough that they are highly likely to believe the same things: “the more disinformation operators know about their target audiences, the easier it is to find, manipulate, and deceive them.”[5] Though these paid ads may reach relatively few people, their strongly ideological content can lead to their being widely shared by those who receive them, which in turn leads social networks’ algorithms to boost the stories in their friends’ feeds. Even the most outlandish stories benefit from this process: because those who believe in one conspiracy theory are likely to believe in others, finding voters who already subscribe to at least one such theory is crucial to spreading a new one.[6] (A simplified sketch of how lookalike matching works appears after this list.)
  2. Energizing or suppressing voters: While political disinformation may not convert anyone to an opposing point of view, it can have a powerful impact on whether or not someone actually votes. Ads with extreme messages were once seen as risky because they might alienate more moderate voters, but targeted ads make it possible to send extreme messages just to hard-core supporters – or, perhaps more importantly, to send messages to an opponent’s likely supporters to discourage them from voting. (For example, while committed Trump supporters were shown ads that called Hillary Clinton a traitor, likely Clinton voters received ads highlighting comments she’d made referring to some African-American men as “super predators,” in order to make them conflicted about voting for her.)[7] These “dark ads” are visible only to the ad buyer, the recipients and the social network, which makes it almost impossible for an opponent – or government bodies tasked with overseeing an election – to track or respond to them.[8]
    As well, because these ads are shared between supporters, they can influence the views of an entire community. There is an escalating effect as members avidly share increasingly extreme messages in order to earn endorsements and shares from their friends, reflecting positively on themselves in the process. A particularly striking example is found among those who share images of food online, where a mix of social network algorithms and the human desire for attention and prestige has led people to post pictures of hamburgers too big to eat or pizzas with tacos on them: “When someone wanted to broaden out beyond his or her immediate social networks, one of the most effective ways to achieve mass appeal turned out to be by turning to the extreme.”[9]
  3. Setting the agenda during breaking news: News has been described as “the first rough draft of history,”[10] but increasingly it is the final draft as well: once a false or inaccurate story has been widely spread, people are likely to believe it over a new one – even if the new story is a retraction from the same source as the original.[11] Disinformation agents have a variety of techniques for getting their version of a story spread first, many of which involve “gaming” the algorithms of search engines and social networks. These include having automated “bot” accounts spread a story (particularly with a shared hashtag, to push it to a “trending” level)[12] or posting a web address hundreds of times on a forum site such as Reddit, to boost its place in search engine results.[13] Securing a top spot in search results is a particularly important goal since many users wrongly assume that search engines are neutral arbiters of reliability,[14] and (perhaps because of this) the top five search results on any given topic get three-quarters of the clicks.[15] For example, search results following the mass shooting in Las Vegas were dominated by conspiracy theories,[16] and a doctored BuzzFeed article that supposedly called for all guns owned by White people to be confiscated, tweeted by the blog Gateway Pundit, was widely circulated following the school shooting in Parkland, Florida.[17] Although stories like these are usually quickly debunked (and sometimes retracted), there’s evidence that under certain conditions debunking can backfire.[18] Moreover, by the time a story is debunked or retracted, it may already have spread widely through social networks.[19] As Dartmouth professor Brendan Nyhan puts it, “misinformation spreads widely before it can be downgraded in the algorithms.”[20] Following the top search results, especially when starting from search terms associated with extreme political views, can lead a searcher down a “rabbit hole” into a universe of conspiracy theories, as apparently happened to Dylann Roof, who murdered nine people at a church in South Carolina after searching for the term “black on white crime.”[21]
  4. “Infecting” legitimate news: As noted previously, legitimate news sources can be led by bias – their own or their perception of their audience’s – to give too much credence to a plausible but unconfirmed story. For example, during the 2016 U.S. election, a YouTube video suggesting that Hillary Clinton had undisclosed health problems led to #hillaryshealth becoming a trending topic on Twitter, which in turn led journalists to see the story as more newsworthy (though no new information had appeared).[22] This phenomenon gives a small number of highly committed, reasonably savvy operators – freelance or state-sponsored – a hugely disproportionate effect on the public conversation, especially during times such as elections when news outlets are constantly looking for fresh and engaging content. There can be harmful effects even when the disinformation being spread is not entirely false: as Gilad Lotan writes of his study of social networks during the Israel-Gaza conflict, “neither side… was false per se. Rather, each carefully crafted story on either side omitted important detail and context. When this happens constantly, on a daily basis, it systematically and deeply affects people’s perception of what is real.”[23]
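
Lookalike matching, mentioned in the first point above, is conceptually simple: starting from the profiles of people already known to respond to a message, a platform finds other users whose behavioural data place them nearby in “feature space.” The Python sketch below is a minimal illustration of that idea only; the profiles, feature values and similarity threshold are all invented for the example, and real ad platforms rely on proprietary models and vastly richer data.

```python
# Minimal, hypothetical sketch of "lookalike audience" matching.
# All profiles, features and the threshold below are invented for
# illustration; real ad platforms use proprietary models and far richer data.
import math

# Each profile is a small vector of interest/behaviour signals.
profiles = {
    "seed_1": [0.9, 0.1, 0.8, 0.0],     # known to engage with the message
    "seed_2": [0.8, 0.2, 0.9, 0.1],     # known to engage with the message
    "user_x": [0.85, 0.15, 0.75, 0.05],
    "user_y": [0.1, 0.9, 0.0, 0.8],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def lookalikes(seed_ids, candidates, threshold=0.95):
    """Return candidates whose average similarity to the seed audience
    exceeds the threshold -- i.e. the 'lookalike audience'."""
    matches = []
    for uid, vec in candidates.items():
        if uid in seed_ids:
            continue
        avg = sum(cosine_similarity(vec, candidates[s]) for s in seed_ids) / len(seed_ids)
        if avg >= threshold:
            matches.append((uid, round(avg, 3)))
    return matches

print(lookalikes(["seed_1", "seed_2"], profiles))
# [('user_x', 0.995)] -- user_x resembles the seeds; user_y does not
```

What makes this simple idea powerful for “precision propaganda” is the asymmetry it creates: a message can be delivered only to the people statistically most disposed to accept it, while remaining invisible to almost everyone who might challenge it.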

[1] Marwick, Alice, and Rebecca Lewis. Media Manipulation and Disinformation Online. Data & Society, May 15, 2017.
[2] Nyhan, Brendan. “Fake News and Bots May Be Worrisome, but Their Political Power Is Overblown.” The New York Times, February 13, 2018.
[3] Ghosh, Dipayan and Ben Scott. #DigitalDeceit: The Technologies Behind Precision Propaganda on the Internet. New America, January 2018.
[4] “Cyber Threats to Canada’s Democratic Process.” Communications Security Establishment, June 16, 2017.
[5] Ghosh, Dipayan and Ben Scott. #DigitalDeceit: The Technologies Behind Precision Propaganda on the Internet. New America, January 2018.
[6] Koerth-Baker, Maggie. “Why Rational People Buy Into Conspiracy Theories.” The New York Times, May 21, 2013.
[7] Green, Joshua, and Sasha Issenberg. “Inside the Trump Bunker, With Days to Go.” Bloomberg, October 27, 2016.
[8] Madrigal, Alexis. “What Facebook Did to American Democracy.” The Atlantic, October 12, 2017.
[9] Kozinets, Robert. “How social media fires people’s passions and builds extremist divisions.” The Conversation, November 13, 2017.
[10] Fitch, George. “The Educational Value of ‘News.’” The State, December 1905.
[11] Lewandowsky, Stephan, Werner G.K. Stritzke, Klaus Oberauer, and Michael Morales. “Memory for Fact, Fiction, and Misinformation: The Iraq War 2003.” Psychological Science, Vol. 16, Issue 3, March 1, 2005.
[12] Lotan, Gilad. “Fake News Is Not the Only Problem.” Data & Society: Points, November 22, 2016.
[13] “This is How Redditors Manipulated Google’s Image Search Engine.” Edgy Labs, July 18, 2017. Retrieved April 13, 2018, from https://edgylabs.com/google-search-engine-influenced-reddit
[14] Daniels, Jessie. Searching for Dr. King: Teens, Race and Cloaked Websites. August 19, 2009.
[15] Lee, J. “Study Shows #1 on Google Equals 33% of Traffic.” Search Engine Watch, June 20, 2013. Retrieved April 13, 2018, from https://searchenginewatch.com/sew/study/2276184/no-1-position-in-google-gets-33-of-search-traffic-study
[16] “Search and social media was filled with clickbait and propaganda in the wake of Vegas shooting.”
[17] Funke, Daniel. “Imposter tweets made it even harder for a reporter to cover Florida school shooting.” Poynter.org, February 15, 2018.
[18] Fazio, Lisa. “Unbelievable news? Read it again and you might think it’s true.” The Conversation, December 5, 2016.
[19] “Tech giants sorry for false news about Las Vegas gunman.” BBC News, October 3, 2017.
[20] Carey, Benedict. “How Fiction Becomes Fact on Social Media.” The New York Times, October 20, 2017.
[21] Pasquale, Frank. “From Holocaust Denial To Hitler Admiration: Google’s Algorithm Is Dangerous.” Huffington Post, February 26, 2017.
[22] Lotan, Gilad. “Fake News Is Not the Only Problem.” Data & Society: Points, November 22, 2016.
[23] Lotan, Gilad. “Fake News Is Not the Only Problem.” Data & Society: Points, November 22, 2016.