DHS Public-Private Analytic Exchange Program Report: Combatting Targeted Disinformation Campaigns A Whole-of-Society Issue Part Two August 2021

The following is the second in a series of reports on disinformation published by the Department of Homeland Security's Public-Private Analytic Exchange Program.  The first report in the series is also available:

 

DHS Public-Private Analytic Exchange Program Report: Combatting Targeted Disinformation Campaigns A Whole-of-Society Issue October 2019

Combatting Targeted Disinformation Campaigns: A Whole-of-Society Issue Part Two

Page Count: 48 pages
Date: August 2021
Restriction: None
Originating Organization: Department of Homeland Security Public-Private Analytic Exchange Program
File Type: pdf
File Size: 4,442,564 bytes
File Hash (SHA-256): 0CD27C5FF4F46CA25A38A26827A11B82ACE7C1A96E849C74AFC18CADD884C607



Recent events have demonstrated that targeted disinformation campaigns can have consequences that affect the lives and safety of information consumers. On social media platforms and in messaging apps, disinformation has spread like a virus, infecting information consumers with contempt for democratic norms and intolerance of the views and actions of others. These events have highlighted the deep political and social divisions within the United States. Disinformation helped to ignite long-simmering anger, frustration, and resentment, resulting, at times, in acts of violence and other unlawful behavior.

All information consumers are vulnerable to being deceived by imposters, charlatans, hucksters, con men, and self-proclaimed experts. But ideological rigidity and intolerance of opposing views make information consumers especially vulnerable to such deception. In polarized environments, threat actors find ample opportunity to spread disinformation. Their voices are amplified by disgruntled audiences willing and sometimes eager to spread messages of discord.

In 2019, the Combatting Targeted Disinformation Campaigns team published the first of two reports on targeted disinformation campaigns. In the first report, we provided a broad overview of targeted disinformation campaigns, describing how disinformation enters the information ecosystem, how threat actors exploit modern technology, and how information consumers wittingly or unwittingly contribute to the spread of disinformation. We then suggested ways to counter these campaigns. We concluded that combatting these campaigns requires a comprehensive solution involving many sectors of society and multiple lines of effort.

In our second paper, we expand on two themes explored in the first paper: 1) how to stem the supply of disinformation; and 2) how to reduce the demand for disinformation. In this paper, we recommend approaches that impact both supply and demand.
None of these approaches is new, nor is any decisive by itself. When combined and implemented consistently, however, the whole is greater than the sum of its parts.

Disinformation should not be viewed as a problem to be solved, but as a condition to be treated. There is no cure. However, preventative and alleviatory measures can be taken.

Disinformation campaigns are threats to national security because they have a corrosive impact on democratic institutions and civil society in the United States. A healthy democracy depends on well-informed citizens, the competition of ideas, and the willingness to compromise. Disinformation campaigns undermine all three.

Disinformation campaigns have contributed measurably to divisions within U.S. society. Threat actors take advantage of these divisions and harness the power of social media platforms to spread disinformation to large audiences. At times, these disinformation campaigns influence the real-world behavior of information consumers. For example, disinformation about the 2020 presidential election shaped the lead-up to the events of January 6, 2021 in Washington, D.C., and disinformation about the COVID-19 vaccination program has undermined efforts to curb the virus and its variants. Disinformation also weakens international alliances and undermines U.S. attempts to project soft power abroad.

“Naming and shaming” is an approach to countering disinformation in which the threat actors behind disinformation campaigns are publicly identified. When threat actors use fake personas to deceive, information consumers may decide to disregard the threat actor, and the disinformation associated with that threat actor, once the deception is uncovered. The expectation is that a threat actor who is publicly identified may be shamed into altering course.

2.1.2. Identifying Domestic Actors

Since the 2016 U.S. presidential election and the emergence of the COVID-19 pandemic, the focus on disinformation campaigns has shifted from foreign threat actors to domestic ones. On balance, disinformation campaigns originating from domestic actors are more enduring and damaging than those originating from foreign actors. But when domestic actors are involved, even if co-opted by foreign actors, the factors that bear on the decision to identify them publicly differ in important ways from those that apply to foreign actors.

Both private and public entities have legal obligations to protect information they collect about information consumers. This obligation depends on the type of information collected and how it was collected. Private entities generally have no legal obligation to protect information about information consumers with whom they have no fiduciary relationship, provided the information was gathered through licit means. Therefore, these private entities are free to publish this information when it suits their purpose. For example, in a report published in March 2021, the Center for Countering Digital Hate named twelve information consumers responsible for 73% of anti-COVID-19 vaccine content online.

The report included the name, photo, and screenshots of posts on Facebook and Twitter of each individual. The New York Times later published a more detailed article on the person listed in the report as the greatest offender.

Whether the identification of domestic threat actors by private entities helps to stem the spread of disinformation is unclear. While naming and shaming may be superficially appealing as a method of exacting a price from those who peddle disinformation, it might make matters worse by provoking a fierce backlash from those sympathetic to the views of the individuals identified. In the end, this further entrenches both sides in their respective ideological positions. Since polarization is a primary reason for the success of disinformation campaigns, attempting to counter these campaigns with methods that may generate even more polarization seems questionable. Moreover, revealing the identities of domestic threat actors and other personally identifiable information can render these individuals vulnerable to harassment or more egregious forms of retaliation. We do not condone harassment and vigilantism as means of responding to domestic threat actors.

It is extremely improbable that disinformation campaigns will disappear in the foreseeable future. Despite the increasing effectiveness of countermeasures, threat actors will always find avenues to spread their disinformation and will adopt new tactics as new forms of technology emerge. Compared to many legitimate forms of persuasion and influence, disinformation campaigns are inexpensive and frequently have few downsides for the threat actor. Threat actors take ready advantage of software and communication platforms that they did not develop and benefit from political and social conditions that they did not create.

In this report, we concluded that disinformation campaigns are threats to national security because they undermine the well-being of our society. Though disinformation campaigns cannot be stopped entirely, we believe that measures can be taken to impede these campaigns and reduce their impact. Building the resilience of information consumers to disinformation will likely bear more fruit than focusing on technological solutions. To build such resilience, it is essential to give information consumers better tools to verify the information they consume online, identify the threat actors behind disinformation campaigns, check the claims of imposters hiding behind fake personas and credentials, and control their content feeds.
