Understanding Mis-, Dis-, and Malinformation in the Wake of the Elections

Effectively navigating the plethora of information presented to us every day via 24-hour news cycles, social media, and family, friends, and colleagues can be daunting. This is made more difficult by the intentional and unintentional spread of faulty or corrupted information, the possibility of confusing falsehood with fact, and threat actors deliberately manipulating narratives for strategic advantage. The risks associated with spreading and consuming inaccurate and distorted information affect individuals and society alike. Moreover, the ease with which “bad” information can be spread, consumed, and believed today makes this an inherently psychological and social issue that must be combatted on all fronts. From shaping democratic election results to influencing consumer behavior and the supply and demand of essential products and services, at times it feels we are drowning in information but left wanting for knowledge. This blog explores the strategies that individuals, governments, and businesses can use to combat manipulated, false, or faulty information.

First, we must define what qualifies information as “bad.” The Cybersecurity and Infrastructure Security Agency (CISA) splits manipulated, false, or faulty information into three categories based on the intent or context of its use: misinformation, disinformation, and malinformation.

  • Misinformation is false, but not created or shared with the intention of causing harm.
  • Disinformation is deliberately created to mislead, harm, or manipulate a person, social group, organization, or country.
  • Malinformation is based on fact, but used out of context to mislead, harm, or manipulate.

CISA refers to instances of mis-, dis-, and malinformation (MDM) as “information activities.” The agency is specifically concerned with information activities perpetrated by “foreign and domestic threat actors…seeking to interfere with and undermine our democratic institutions and national cohesiveness.” Dr. Merten Reglitz, from the University of Birmingham, shares CISA’s concerns, saying that “the primary danger fake news poses to democratic values and institutions lies in the corrosive effect it has on trust among citizens and thus on citizens’ trust in their democracy.” A key part of our democratic values and institutions is citizen trust in free and fair elections, something MDM can undermine even when no genuine threat to election integrity exists.

Since MDM poses an existential threat to democracy and democratic values, it is important to safeguard against it and its effects. We may not be able to prevent the creation and spread of MDM, as threat actors will always seek to cause damage or disruption, but we can mitigate or treat the symptoms and reduce the risk and impact. One way to accomplish this is through inoculation, or “prebunking.” Psychological inoculation works in a manner similar to a vaccine: it prepares an individual to resist MDM by exposing them to a weakened, already-refuted form of the information. The process of inoculating or prebunking consists of two key parts:

  1. Forewarning an individual that they will be exposed to information that may go against their beliefs and informing them of the dangers posed by MDM.
  2. Providing examples of the position, belief, or argument represented in the specific MDM instance and directly refuting them.

This method can be easily adapted into videos, small games (such as Go Viral!, a game designed to combat COVID-19 misinformation), trainings, or simple infographics, each with its own effectiveness and longevity (i.e., how long the inoculation’s effects last).

While inoculation theory has proven effective, we won’t always have the foresight, time, or resources to prevent malicious information activities before they occur. We are then forced to react and mitigate risks where possible. A traditional method of responding to MDM is to debunk, or fact-check, the information: correcting an instance of MDM once it is released with facts or counterarguments. However, the Debunking Handbook 2020 notes that issuing corrections alone, or simply marking information as questionable or untrustworthy, is not enough to properly counter MDM. The handbook recommends the following components of a successful debunking:

  1. Fact: Lead with the fact if it’s clear, pithy, and sticky—make it simple, concrete, and plausible. It must “fit” with the story.
  2. Warn About the Myth: Warn beforehand that a myth is coming…mention it once only.
  3. Explain Fallacy: Explain how the myth misleads.
  4. Fact: Finish by reinforcing the fact—multiple times if possible. Make sure it provides an alternative causal explanation.

Another method for countering misinformation is accuracy priming, or nudging. Studies show that people generally don’t intend to spread misinformation but are distracted from, or otherwise inattentive to, the content’s accuracy. Accuracy priming consists of reminding an individual to consider accuracy, or nudging them toward doing so. This can be done by asking simple, noncritical questions, such as “How accurate is this headline?” or “Is this content true or false?”

None of these methods is foolproof; each has its own drawbacks and benefits. However, research suggests that combining counter-MDM methods is more effective than using any one method alone. While this blog briefly discusses the psychological and social strategies that can be used to combat MDM, implementation is always a difficult sticking point.

At OTHSolutions, we are proud to offer consulting and contracting support to agencies like CISA whose mission and vision include strengthening the election security process and combating mis-, dis-, and malinformation. As you prepare to vote in the midterms tomorrow, consider where you get the information you use to make decisions and the intent of the information provider, and always be sure to do your own fact-checking. To better understand and mitigate MDM, check out the CISA MDM Resource Library and the Debunking Handbook 2020.
