DHS Public-Private Analytic Exchange Program Report: “Combatting Targeted Disinformation Campaigns: A Whole-of-Society Issue” (October 2019)

The following is the first in a series of reports on disinformation published by the Department of Homeland Security's Public-Private Analytic Exchange Program.  The second report in the series is also available:

 

DHS Public-Private Analytic Exchange Program Report: “Combatting Targeted Disinformation Campaigns: A Whole-of-Society Issue, Part Two” (August 2021)

Combatting Targeted Disinformation Campaigns: A Whole-of-Society Issue

Page Count: 28 pages
Date: October 2019
Restriction: None
Originating Organization: Department of Homeland Security Public-Private Analytic Exchange Program
File Type: PDF
File Size: 1,778,869 bytes
File Hash (SHA-256): ABDD9A450FB93E220D9361A9AEAD5D3DA4E531A5BE91FFCD22D514B7FADCF3A1


Download File
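
For readers who wish to confirm the integrity of the downloaded file, the minimal sketch below computes a SHA-256 digest and compares it to the hash listed above. The filename is an assumption; point it at wherever the PDF was saved.

    # Minimal sketch: verify the downloaded PDF against the SHA-256 hash above.
    import hashlib

    EXPECTED_SHA256 = "ABDD9A450FB93E220D9361A9AEAD5D3DA4E531A5BE91FFCD22D514B7FADCF3A1"

    def sha256_of(path: str) -> str:
        """Stream the file in chunks so large PDFs need not fit in memory."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest().upper()

    if __name__ == "__main__":
        # Assumed filename; substitute your actual download path.
        actual = sha256_of("combatting-targeted-disinformation-campaigns.pdf")
        print("OK" if actual == EXPECTED_SHA256 else "MISMATCH: " + actual)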

In today’s information environment, the way consumers view facts, define truth, and categorize various types of information does not adhere to traditional rules. The shift from print sources of information to online sources and the rise of social media have had a profound impact on how consumers access, process, and share information. These changes have made it easier for threat actors to spread disinformation and exploit the modern information environment, posing a significant threat to democratic societies. Accordingly, disinformation campaigns should be viewed as a whole-of-society problem requiring action by government stakeholders, commercial entities, media organizations, and other segments of civil society.

Before the 2016 U.S. presidential election, disinformation was not at the forefront of American discourse. U.S. government efforts in the disinformation arena had focused primarily on combating transnational terrorist organizations. Social media companies were just becoming aware of how their platforms empowered threat actors on a large scale. Mainstream media organizations were not yet plagued by accusations of spreading “fake news,” and fears of concerted foreign efforts to undermine American society had not yet seeped into the consciousness of the general public.

Since the presidential election, disinformation campaigns have been the subject of numerous investigations, research projects, policy forums, congressional hearings, and news reports. The result has been a better understanding of the methods and motives of the threat actors behind these campaigns and of the campaigns' impact, which in turn has improved efforts to combat them and minimize the harm they cause.

Until the end of 2018, much of the work on disinformation campaigns was post-mortem, undertaken after a campaign had nearly run its course. By that point, the threat actor had achieved the desired effect and the damage was done. Since late 2018, civil society groups, scholars, and investigative journalists have made great strides in identifying ongoing disinformation campaigns and sharing their findings with social media platforms, which can then remove the inauthentic accounts involved. However, these campaigns are often identified only after the disinformation has entered and been amplified within the information environment, too late to fully negate the harm.

The extent of public- and private-sector cooperation over the next five years to address targeted disinformation campaigns will determine the direction of the issue. We view this issue as a whole-of-society problem requiring a whole-of-society response. The purpose of this paper is to provide a framework for stakeholders to understand the lifecycle of disinformation campaigns, and then to recommend a preliminary set of actions that may help identify and neutralize a disinformation campaign before its disinformation is amplified within the information environment, thus mitigating its impact.

The framework recommends actions for a variety of stakeholders to combat targeted disinformation campaigns by neutralizing threat actors, bolstering social media technology to make it less susceptible to exploitation, and building public resilience in the face of disinformation.

We recommend:

• Support for government legislation promoting transparency and authenticity of online political content. We support passage of the Honest Ads Act, which would hold digital political advertising to the same disclosure requirements as political advertisements on television, on radio, and in print.
• Funding and support of research efforts that bridge the commercial and academic sectors. Academic researchers, armed with appropriate real-world data from commercial platforms, could more effectively explore the trends and methodologies of targeted disinformation campaigns. This research could also help identify the segments of the population most susceptible to disinformation campaigns and guide resources for media literacy efforts. It should also include the development of technical tools to analyze disinformation across platforms and to identify inauthentic content such as deep fakes (a minimal illustrative sketch of one such cross-platform heuristic follows this list).
• Establishment of an information sharing and analysis organization to bring together government entities, research institutions, and private-sector platforms. The organization could facilitate information exchange through a trusted third party and could serve as an information center that pools expertise and tracks disinformation trends and methods.
• Encouragement of media organizations to promote healthy skepticism among their users when consuming online content. This includes providing media literacy resources to users and enhancing the transparency of content distributors.
• Expansion of media literacy programs to build societal resilience in the face of disinformation campaigns. Media literacy could be framed as a patriotic choice in defense of democracy. Public education through advocacy groups like AARP, which can tailor the message of media literacy for their members, could be an effective means of encouraging the adoption of healthy skepticism towards online information.
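
To make the research recommendation above concrete, the sketch below illustrates one very simple heuristic a cross-platform analysis tool might use: flagging near-duplicate posts (copy-paste amplification) by Jaccard similarity over word shingles. The report does not prescribe any particular method; the threshold, post identifiers, and sample text are illustrative assumptions.

    # Illustrative sketch only: flag near-duplicate posts across platforms.
    import re
    from itertools import combinations

    def shingles(text: str, n: int = 3) -> set:
        """Return the set of n-word shingles for a post, ignoring punctuation and case."""
        words = re.findall(r"[a-z0-9']+", text.lower())
        return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

    def jaccard(a: set, b: set) -> float:
        """Jaccard similarity of two shingle sets (0.0 to 1.0)."""
        return len(a & b) / len(a | b) if (a | b) else 0.0

    def flag_near_duplicates(posts: dict, threshold: float = 0.7):
        """Yield pairs of post IDs whose shingle overlap meets the threshold."""
        sets = {pid: shingles(text) for pid, text in posts.items()}
        for (p1, s1), (p2, s2) in combinations(sets.items(), 2):
            if jaccard(s1, s2) >= threshold:
                yield p1, p2

    # Hypothetical posts gathered from two platforms:
    posts = {
        "platformA/123": "The election results were secretly changed overnight, share this now",
        "platformB/456": "the election results were secretly changed overnight share this NOW!!",
        "platformA/789": "Local weather looks great for the weekend hiking trip",
    }
    print(list(flag_near_duplicates(posts)))  # flags the first two posts

Running the example surfaces the first two posts as a matched pair, showing how identical messaging pushed across platforms can be detected even under superficial edits. Real tools would combine many such signals (timing, account metadata, link targets) rather than rely on text similarity alone.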

How Social Media Platforms Enable Disinformation Campaigns

Since the rise of social media, threat actors, whether individuals, nation-states, or other organized groups, have exploited the information environment on an unprecedented scale. Unlike the publication and distribution of print sources, which require publishing houses, editors, proofreaders, promotional advertisements, and bookstores, online information does not require an intermediary between content creator and consumer. As public confidence in mainstream media outlets has waned, interest in social media platforms and other online forums that offer uncensored communication channels to share ideas and commentary has increased.

Though these platforms typically do not require payment from users to establish an account or access content, they are not cost-free. In exchange for free access, platform owners gather user data that enable advertisers to tailor online advertisements to known user preferences. In this arrangement, users are spared content they have little interest in, platform owners can study user behavior to determine how to maximize the time users spend on the platform, and advertisers can serve up content more likely to engage users.

The key to this system is the attention of users. The more alluring the content, the greater the time on the platform, and thus the greater the potential profit. Therefore, social media platforms have an incentive to provide their users with an array of clickbait because doing so increases the revenue generated by selling online advertisements.

Since the events of the 2016 U.S. presidential election, the phenomenon of disinformation campaigns has received a great deal of attention, not just in the news media but also from government agencies, academic institutions, and commercial platforms determined to identify and understand it. While research efforts are plentiful, there is still much to learn about these campaigns and how best to defend against them. While it is not appropriate to dictate what media content Americans consume, we can, as researchers, suggest that opportunities for collaboration across interested sectors continue to expand and that public education be encouraged to build resilience.

In a media environment where mere popularity, attention, and trending status confer truth and legitimacy, the internet can become a turbo-charged rumor mill with no editorial board. Disinformation can generate a great deal of activity in a very short period of time, but whether it amounts to little more than noise in the system or represents a genuine threat is often not readily apparent. This paper emphasizes the importance of understanding targeted disinformation campaigns in the interest of hardening public defenses against them. That means understanding the threat actors who propagate these campaigns, understanding why users are susceptible to them in a complex information environment, and gaining the ability to identify these campaigns through their tell-tale signs.
