Abstract
Disinformation and other forms of manipulative, antidemocratic communication have emerged as a problem for Internet policy. While such operations are not limited to electoral politics, efforts to influence and disrupt elections have created significant concerns. Data-driven digital advertising has played a key role in facilitating political manipulation campaigns. Rather than standalone incidents, manipulation operations reflect systemic issues within digital advertising markets and infrastructures. Policy responses must include approaches that consider digital advertising platforms and the strategic communications capacities they enable. At their root, these systems are designed to facilitate asymmetrical relationships of influence.
This article examines the intersection of online political manipulation and digital advertising. While there is an emerging consensus among international policymakers that manipulative online communication presents a growing challenge to democratic processes, there have been relatively few attempts to understand the linkages between manipulation campaigns and digital advertising systems. Addressing this gap, this study presents a diagnosis of how digital advertising infrastructure, as it is currently designed and managed, creates opportunities for political manipulation and foreign interference. Data-driven ad technologies enhance the influence that advertisers can have on target audiences by leveraging detailed information about individuals, often without their knowledge or consent. As Ravel, Woolley, and Sridharan put it, data-driven advertising is designed “like a one-way mirror” in which campaigns and tech platforms “can see the public, but the public cannot see them.”1 Both foreign and domestic operatives can exploit such ad systems to influence political behavior and discourse through deceptive means that use data to pinpoint cognitive and psychological vulnerabilities to influence individuals and groups.
We offer an assessment of policies for addressing the use of digital advertising systems by foreign operatives and other manipulative agents trying to influence elections, shape political discourse, inflame social division, and undermine democracy. Our policy recommendations synthesize and build upon ideas from a review of over two dozen reports released between 2017 and early 2019 by research institutes, civil society groups, and government inquiries in North America and Europe (see bibliography). While most reports cover a wide spectrum of problems ranging from privacy to data security, a unique feature of this research is that we concentrate specifically on policy responses to manipulation campaigns' use of digital advertising. While much of our discussion applies to the digital advertising industry generally, we focus particularly on social media platforms because of their centrality as spaces of online political discourse and their dominant position in the global advertising market.
Manipulation campaigns and foreign influence operations rarely rely exclusively on digital advertising—they also create deceptive front groups, make use of imposter social media accounts, game search engine algorithms, and deploy bots to distort online conversations, among other tactics. The many challenges of the global information ecosystem are deeply interconnected and policymakers must respond to those challenges through a broad range of initiatives, as we argue in the conclusion. Nonetheless, we believe that policies directed specifically toward digital ad systems represent some of the most urgent “low-hanging fruit” for tackling these problems. We agree with former Facebook chief security officer Alex Stamos that the advertising components of social media “have the most capability for abuse generally,” while regulating their capacities poses “the least free expression concerns.”2 So in this article, we focus on the workings of digital advertising before expanding to consider how problems related to ad systems are linked to other public policy challenges posed by social media and other online environments.
A central finding of this study is that digital ad systems have been built with capacities that can easily be weaponized. When political operatives weaponize ad tech, they use it to identify weak points where groups and individuals are most vulnerable to strategic influence. In such cases, individuals' data is turned against them and used to help political advertisers more effectively influence their targets. While identifying and removing manipulation campaigns is an important effort, we argue the most effective responses to political manipulation must do more than try to prevent “bad actors” from abusing digital ad systems. Rather, the very capacities of digital ad systems that facilitate such weaponized communication need to be recalibrated to better serve democratic ideals. We discuss a range of policy proposals to address these issues in the short and medium term, including increasing advertising transparency, expanding the data rights of individuals, and attenuating advertisers' capability to carve audiences into smaller and smaller segments.
Some government entities, understandably, will be concerned first and foremost with foreign-controlled interference operations. Yet, this article frames the advertising infrastructure facilitating political manipulation as itself a liability for democratic societies—whether campaigns are run by foreign or domestic operatives or some combination. While we recommend greater transparency into the funding of advertising that would help identify and (potentially) eliminate foreign-funded ads, most of the recommendations we offer aim to curb manipulative capacities as a whole. We think there are several reasons that even those most concerned about foreign interference operations should consider this approach. First, since the discovery of foreign interference operations in 2016, state-connected actors have become more sophisticated at covering their tracks in digital space.3 Second, foreign actors may recruit domestic operatives to run campaigns on their behalf—either through coercive means such as blackmail and bribery or based on ideological affinity. Third, as we argue in this article, the current digital advertising infrastructure incentivizes political campaigns to target fragmented communities and amplify social division. An atmosphere of heightened divisiveness and social fracture creates conditions favorable to antidemocratic election interference operations, whether or not they make use of digital advertising.
This article proceeds by defining the scope of the problem of political manipulation campaigns and the role of digital ad systems in such campaigns, summarizing the capacities of contemporary digital ad systems, outlining how these capacities become weaponized by political operatives, examining promising policy responses to these problems as well as limitations to and uncertainties about implementing such policies, and finally placing the problems of manipulation campaigns in a broader context beyond ad systems and outlining what we see as the most crucial policy questions for building more democratic digital media environments.
What Role Does Digital Advertising Play in Political Manipulation?
In recent years, governments have begun to recognize and respond to an emerging set of problems associated with manipulative online political communications.4 Researchers and policymakers have used a number of terms to describe these problems. In the wake of the 2016 Brexit vote and the US Presidential election, the term “fake news” spread quickly among researchers and journalists to designate egregiously inaccurate news stories that were being widely shared on social media. These stories were largely created by either small entrepreneurs angling for profit from clickbait or partisan operatives using false news as an influence tool. However, the term “fake news” was confusing because it could be used to refer to quite different kinds of content, from news satire to good-faith journalistic mistakes to blatantly false news fabricated for profit.5 Populist politicians quickly seized the term and started labeling any critical coverage of them as “fake news.”
More recently the global conversation among researchers and policymakers has shifted toward framing communication problems within the digital media landscape as matters of “misinformation” or “disinformation.”6 Misinformation generally refers to “information whose inaccuracy is unintentional,” while disinformation designates “information that is deliberately false or misleading.”7 Framing the problem exclusively in terms of inaccurate information, however, can itself lead to inadequate responses if actions are limited to fact-checking efforts to identify and remove strictly false information from social media.8
In this article, we use the term “manipulation campaigns” to name a range of deceptive communication strategies that use data-driven advertising to target vulnerabilities to influence, in attempts to shape discourse or behavior to meet strategic objectives. Manipulation campaigns keep some aspects of their operations hidden from their targets, but they do not necessarily traffic in false information. For instance, a domestic manipulation campaign sought to influence a US Senate race in 2017 by creating an online front group called “Dry Alabama” on social media.9 The group promoted a statewide ban on alcohol and used statistics—not necessarily false ones—about car crashes and alcohol-related deaths. Yet, it was not connected to any genuine effort to ban alcohol in Alabama; rather the operatives behind “Dry Alabama” were trying to bring prohibition to the political foreground in order to exacerbate divisions over the issue among Republican voters.
While the digital tactics of manipulation campaigns can be used by anyone, the most sophisticated campaigns will likely be backed by actors able to devote considerable resources to these efforts. Bradshaw and Howard identify the most powerful of such groups as deployments of cyber troops: “government or political party actors tasked with manipulating public opinion online.”10 Such agents may set up impostor social media personas, make use of bots and automated accounts, and exploit recommendation and search algorithms to disseminate their messages. Digital ads can play a vital role in many of these operations. In perhaps the most well-reported example, a Russian organization called the Internet Research Agency (IRA) spent “thousands of U.S. dollars every month” on social media ads and promoted posts in efforts to influence US elections in 2016.11
In their review of global cases of computational propaganda, Bradshaw and Howard report that cyber troops are making “increasing use of paid advertisements and search engine optimization on a widening array of Internet platforms.”12 They find that cyber troop campaigns are often spending large sums of money, and some are drawing on the expertise of “political communication firms that specialize in data-driven targeting and online campaigning.”13 As two analysts put it in their summary of the US 2016 Presidential Election, political manipulation campaigns are “digital marketing 101.”14 Platforms go to extraordinary lengths to collect data on users and their behavior to allow advertisers to decide just who they want to target with what approach. Advertising creates paid priority lanes where message senders can leverage the knowledge acquired through surveillance and profiling. This opens the possibility for weaponizing this data-driven system in the ways we describe in the following.
What Are the Capacities of Data-Driven Advertising That Enable Manipulation Campaigns?
Global digital advertising is estimated to be a US$327 billion industry in 2019.15 Major players include ad platforms such as Google and Facebook, advertising agency conglomerates such as WPP, and a range of data specialists and information technology companies such as data brokers and data management platforms. These companies provide advertising services that leverage consumer data via a massive surveillance infrastructure. This infrastructure offers an increasingly sophisticated toolkit for influencing targeted publics and is readily applied to political objectives. Growing evidence shows that digital advertising has been put to political use not only by official electoral campaigns, but also by special interest lobbies, foreign state actors, and domestic dark money groups.16
Fierce competition has propelled digital advertising companies to build innovative mechanisms for influencing consumers. As Google states in its marketing materials: “the best advertising captures people's attention, changes their perception, or prompts them to take action.”17 Advertising formats are varied and not always easily recognizable as paid communication. Digital display and video ads run alongside an array of search keywords, promoted social media posts, sponsored content, and native advertising formats, all of which can be targeted to highly specific audiences across social feeds, mobile apps, websites, and other channels. Highly segmented message targeting, through digital advertising, can help spur “organic” amplification and generate human assets for information operations.18
Major ad platforms have typically operated as open marketplaces, available to any advertiser who meets basic quality standards. Responding to controversies, platforms have tightened restrictions in recent years, implementing various protocols for advertiser authentication and restricting access to ad services for certain groups. For example, Facebook now requires advertisers in certain countries to “confirm identity and location before running political ads and disclose who paid for them.”19 As we discuss in the following, while such policies seem to take first steps in the fight against political manipulation, these efforts can be circumvented by influence operations with relative ease. The scope and implementation of these systems, which can vary significantly, require careful regulatory scrutiny.
Digital ad infrastructure provides three key interlocking communication capacities.20 The first is the capacity to use consumer monitoring to develop detailed consumer profiles. The second is the capacity to target highly segmented audiences with strategic messaging across devices and contexts. The third is the capacity to automate and optimize tactical elements of influence campaigns. There are numerous technical means that have been developed to enable these capacities.21 Like all advertising formats, digital ad spaces are designed to offer advertisers many choices and options. However, we see these three capacities as essential, top-level features that have been built into digital ad infrastructure and that enable today's data-driven advertising practices.
Surveillance and Profiling
Digital advertising depends on the collection and exchange of vast quantities of online and offline data about individuals. Social media platforms, advertising networks, data brokers, and many other parties record and synthesize a wide range of consumer data across applications and devices in order to more effectively target them with ads.22 Social media companies such as Facebook are especially prodigious in generating data from closely monitoring their users. Research conducted by ProPublica showed Facebook was using at least 52,000 attribute categories to classify its two billion users.23 Among the data Facebook collects from its own services are user posts, reactions to posts, profile information, social connections, data extracted from photographs and video (including facial recognition data), information on user logins, and, at least at one point in time, posts that users “self-censored” (i.e., composed but did not actually publish).24
Data gathered firsthand is often merged with third-party data to enrich consumer profiles and enable ad distribution mechanisms such as the widely used “real-time bidding” (RTB) systems.25 Distinct data points are attached to people and devices via unique persistent identifiers, which are then stored in profile databases. One of the longest running persistent ID technologies is the HTTP cookie, which now operates alongside a host of other identifying mechanisms.26 Using persistent IDs, ad platforms continuously update profile records with new information, which over time provides insights into individual identities, interests, behaviors, and attitudes. Facebook, for example, partners with large numbers of websites and mobile applications to share data for profiling and targeting.27
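To make these mechanics concrete, the following is a minimal Python sketch of the general pattern described above: a profile store keyed by a persistent identifier, merging first-party observations with third-party attributes and querying the result for a targeting segment. The field names, identifiers, and matching logic are invented for illustration and are not drawn from any actual platform.

```python
# Illustrative sketch only (hypothetical field names): a profile store keyed by
# a persistent identifier, merging first-party events with third-party attributes.
from collections import defaultdict

class ProfileStore:
    def __init__(self):
        # persistent_id -> accumulated profile record
        self.profiles = defaultdict(lambda: {"events": [], "attributes": {}})

    def record_event(self, persistent_id, event):
        """Log a first-party observation (e.g., a page view or a 'like')."""
        self.profiles[persistent_id]["events"].append(event)

    def merge_third_party(self, persistent_id, attributes):
        """Enrich a profile with attributes obtained from a data broker or partner."""
        self.profiles[persistent_id]["attributes"].update(attributes)

    def segment(self, predicate):
        """Return all persistent IDs whose profiles match a targeting predicate."""
        return [pid for pid, prof in self.profiles.items() if predicate(prof)]

store = ProfileStore()
store.record_event("cookie-123", {"type": "page_view", "url": "example.com/politics"})
store.merge_third_party("cookie-123", {"inferred_interest": "local news", "region": "midwest"})

# A hypothetical targeting predicate: users inferred to be interested in local news.
audience = store.segment(lambda p: p["attributes"].get("inferred_interest") == "local news")
print(audience)  # ['cookie-123']
```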
The value of this for advertisers and influence campaigns lies not simply in individual data points but in the inferences and behavior predictions that can be drawn from large pools of data. In some of the most controversial cases, advertisers have tried to develop “underlying psychological profiles” to create influence campaigns customized to psychological dispositions.28 As early as 2013, researchers developed methods to reliably infer sensitive personal attributes based solely on Facebook “Likes” data;29 these included personality traits, political and religious views, intelligence, happiness, and sexual orientation. As psychological profiling and predictive analytics have advanced, advertisers have found opportunities to design campaigns around characteristics and traits that have not been self-disclosed by the targets.30 Such inferences have been made available to target, or exclude, politically sensitive groups for social media ad campaigns.31
Microtargeting
Using observed data, inferred insights, and a range of contextual information, advertising is targeted in ways that seek to maximize impact and efficiently produce desired influence outcomes. Individual profiles are grouped into addressable publics through a variety of targeting mechanisms that govern both audience composition (selecting who sees a particular message) and ad placement (determining when and where particular ads are shown). Facebook's full-service ad platform illustrates key elements of this targeting capacity. Advertisers can use the built-in Ad Manager to manually select targeting criteria from among many thousands of possible attributes. To enable more precision, Facebook's Custom Audience tool allows advertisers to target specific groups by uploading lists of identifying information such as e-mail addresses or voter registration records. Using predictive analytics, Facebook's Lookalike Audience feature “clones” audiences that share certain attributes with targeted publics. While some major platforms have tried to prevent advertisers from directly targeting based on sensitive attributes such as ethnicity, researchers have found that malicious advertisers can still target these groups by using proxy criteria.32
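The list-matching logic behind custom-audience-style tools can be illustrated with a brief, simplified sketch. Hashing and matching details vary by platform, and the names and data below are purely illustrative assumptions for the example.

```python
# Simplified sketch of list-based audience matching of the kind "custom audience"
# tools provide: an advertiser's contact list is hashed and matched against the
# platform's own hashed user records. Details vary by platform; names are illustrative.
import hashlib

def normalize_and_hash(email: str) -> str:
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# Advertiser side: a donor list or voter file is hashed before upload.
advertiser_list = ["alice@example.com", "Bob@Example.com "]
uploaded_hashes = {normalize_and_hash(e) for e in advertiser_list}

# Platform side: users' registered emails, also hashed for matching.
platform_users = {"user_1": "alice@example.com", "user_2": "carol@example.com"}
platform_hashes = {normalize_and_hash(e): uid for uid, e in platform_users.items()}

# The matched audience is the overlap; these users can then be targeted directly
# or used as a seed for "lookalike"-style expansion.
matched_audience = [platform_hashes[h] for h in uploaded_hashes if h in platform_hashes]
print(matched_audience)  # ['user_1']
```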
Microtargeting that is designed to exploit personality traits and psychological profiles has been found to be particularly effective.33 In 2017, leaked documents revealed that Facebook claimed the ability to predict its teenage users' emotional states to give advertisers the means to reach those who feel “worthless,” “insecure,” and “anxious.”34 That same year, the British Army ran a recruitment campaign on Facebook that targeted 16-year-olds around the time that standardized test results were released, typically a moment of heightened uncertainty.35 Some of the ads suggested that students who were discouraged by their results might register for the army rather than, say, pursue further education. In 2015, antiabortion groups employed a digital ad agency to use mobile geofencing targeting to send ads to women who visited reproductive health clinics in states across the United States. The ads, which included messages such as “You have choices,” were triggered via GPS location data and were served to women for up to 30 days after leaving the target area.36
Automation and Optimization
While targeting parameters can be manually configured with great precision, digital advertisers increasingly use automated decision-making systems to test and optimize the composition of target publics as well as the timing, placement, and even content of ad messages.37 Ad tech infrastructure gives advertisers the capacity to offload key tactical decisions to specialized systems that continuously incorporate the results of multivariate experimentation to improve performance.
RTB systems, used to dynamically place ads across media channels, increasingly incorporate machine learning systems to evaluate the results of large numbers of placements in order to determine which consumer attributes are the most predictive of desired influence outcomes.38 Techniques for “content optimization” apply a similar logic to ad messaging. Through methods like split testing (also called A/B testing), advertisers can experiment with many variations of messaging and design to find what works. For instance, consider a campaign that wants to determine how to inflame feelings among rural and exurban communities that they are being looked down upon by urban elites. This campaign may test scores of slogans and images to see which combinations receive the most shares and engagements under which specific microtargeting parameters. Potentially, the campaign may find that taste categories, such as an interest in certain music groups or genres, can be used to predict which style of slogans and ads work best for which targets. These and other techniques help advertisers customize outreach to individuals based on forecasts of their vulnerability to different influence strategies and, through repeat engagements, attempt to home in on the most influential persuasion strategy for each user.39 Well-resourced political campaigns have reportedly experimented with thousands of ad variations to see which are the most effective. Advertisers can use such tools to determine what issues resonate with particular targets as well as test for fears or prejudices that can be invoked to influence political behavior.
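The optimization loop described above can be illustrated with a small, synthetic sketch that combines split testing with a simple explore/exploit rule. The variants, segments, and engagement rates are invented, and real campaign tooling is far more elaborate; this is only meant to show how spend drifts toward whichever message performs best within each targeted segment.

```python
# Synthetic illustration of split testing with an epsilon-greedy rule: serve ad
# variants, track engagement per targeting segment, and shift delivery toward
# the best-performing variant for each segment. All data here is invented.
import random

variants = ["slogan_a", "slogan_b", "slogan_c"]
segments = ["segment_rural", "segment_exurban"]

# Hypothetical "true" engagement rates the campaign does not know in advance.
true_rates = {(v, s): random.uniform(0.01, 0.05) for v in variants for s in segments}

impressions = {(v, s): 0 for v in variants for s in segments}
engagements = {(v, s): 0 for v in variants for s in segments}

for _ in range(20000):
    s = random.choice(segments)
    # Mostly exploit the best-known variant for this segment, occasionally explore.
    if random.random() < 0.1 or all(impressions[(v, s)] == 0 for v in variants):
        v = random.choice(variants)
    else:
        v = max(variants, key=lambda v: engagements[(v, s)] / max(impressions[(v, s)], 1))
    impressions[(v, s)] += 1
    if random.random() < true_rates[(v, s)]:
        engagements[(v, s)] += 1

for s in segments:
    rates = {v: engagements[(v, s)] / max(impressions[(v, s)], 1) for v in variants}
    best = max(rates, key=rates.get)
    print(f"{s}: best-performing variant is {best} (observed rate {rates[best]:.3f})")
```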
These systems bring significant speed and cost advantages, allowing advertisers to quickly and efficiently tailor their efforts to meet particular strategic objectives. Campaigns can be optimized for individual behaviors like clicks and video views, but they can also be tuned to elevate particular conversations or promote social interaction.40 As standard practice, digital marketing campaigns are coordinated across multiple platforms and channels and paid advertising is often deployed in conjunction with other promotional techniques. Tools such as social media management services enable advertisers to operate complex multiplatform campaigns and use automated decision-making systems to “optimize persuasive power for every dollar spent.”41
How Do Manipulation Campaigns Weaponize Digital Advertising?
It is difficult to imagine that the vast and sprawling infrastructure of monitoring, profiling, and optimizing that fuels data-driven advertising could have been built without public reassurances of its benign purposes. In communication with citizens and regulators, representatives of the digital ad industry have continually promised that the aim of this sophisticated targeting is simply to make advertising more efficient, which benefits consumers and advertisers alike. For instance, the Digital Advertising Alliance of Canada proclaims targeting results in “better ads” for users because, “when advertisers use online interest-based advertising tools, you get ads that are more interesting, relevant, and useful to you.”42
Data-driven advertising has proven to be a dual-use technology. It can be used to help match Internet users with ads for products that fit their predicted interests. Yet, it can also be used against users' interests. When advertisers use data-driven advertising to target weak points where groups or individuals are most vulnerable to strategic influence, they are weaponizing digital ad systems. Weaponizing digital advertising turns the data that the industry champions as a way to identify consumers' interests into data that can be used to shape and modify political behaviors and attitudes.
Even as the ad industry has made the public case for the mutual benefits of targeted ads, some advertising firms have been exploring how to take advantage of digital targeting in ways that are clearly contrary to that “everyone benefits” spirit. One avenue has been through advertisers' uptake of research from behavioral and cognitive science. Scientists have identified what behavioral economist Dan Ariely refers to as the “predictable irrationality” of human decision-making.43 Advertising firms have tried to develop techniques whereby they can intervene upon the heuristics and patterned flaws in human decision-making to influence decisions through precisely targeted and timed ads.44 In one illustrative example, a marketing firm advised beauty product advertisers to figure out when and in what situations women feel most “vulnerable about their beauty” and use those moments for strategic targeting.45
Both commercial and political advertising have long raised concerns about manipulation. There is a robust critical communication literature that analyzes tactics advertisers use to influence desires, fantasies, and cultural meaning-making processes.46 In many respects, there are significant continuities between digital techniques and older advertising practices. It would be a mistake to only emphasize rupture in critical approaches to data-driven ads. That said, today's digital media landscape enables targeting at an unprecedented degree of precision and at an unprecedented scale. New technical capacities in tandem with laissez-faire regulatory regimes have had enormous consequences for advertising.47 Among other shifts, data-driven advertising has led to a “behavioral turn” in marketing theory. Marketers and political operatives are embracing models of human decision-making informed more by behavioral science than the psychoanalytic and semiotic models of the human mind that informed so much of twentieth century advertising. This behavioral turn has inspired an approach to both commercial and political advertising that focuses on strategically intervening in targets' decision-making processes.
Political manipulation campaigns also weaponize digital advertising by using it to identify and target vulnerabilities to influence, though in some cases political weaponization tactics look different from those of commercial manipulators. Commercial manipulators tend to rely on behavioral science to identify individual cognitive vulnerabilities. They are looking for strategic points of behavioral modification or decision influence, such as identifying particular types of moods when someone might be influenced to make a purchase they otherwise would not. Political manipulators may use some of the same behavior modification techniques as well; however, most known political manipulation campaigns focus more on amplifying or channeling group-based identity threats. As the noted political psychologist Leonie Huddy says, “group identities are central to politics, an inescapable conclusion drawn from decades of political behavior research.”48 A large body of research suggests that when people perceive threats to a personal identity—whether those threats be bodily, material, or symbolic—that identity itself tends to take on more salience.49 Invoking identity threats can mobilize political action through promoting calls to stand up for one's threatened in-group. Or it can justify denigration of or attacks on the out-group perceived as threatening.
Social media companies have still not released all the data necessary to offer a fully detailed portrait of the manipulation campaigns over their ad platforms. Yet, there is one specific case that has received the most scrutiny—the Russian IRA's attempts to interfere in US politics, especially from 2015 to 2017. Public outcry and political pressure from the US Congress led Facebook, Google, Twitter, and others to make available much more detail than usual about this particular manipulation campaign. Two in-depth studies of IRA operations in the United States—one led by New Knowledge, the other by Oxford's Computational Propaganda Project—show how data-driven advertising allowed the IRA to target specific groups with content intended to inflame identity threats and exacerbate social division.50 The IRA campaigns involved both advertising and peer-to-peer (organic) content, but targeted advertising appears to have played an instrumental role in seeding the spread of the organic content by building followings for inauthentic accounts.51
The IRA campaign leveraged identity threats both to mobilize support for candidates and issues they sought to aid and to splinter opposition groups or suppress voting from groups likely to support opposition candidates. Researchers found that during the 2016 campaign, the IRA messages aimed at conservatives evinced “a clear and consistent preference for then-candidate Donald Trump from July 2015 onward” while the IRA was also “strong and consistent in their efforts to undermine the candidacy of then-candidate Hillary Clinton throughout all of their pages.”52 The IRA campaign portrayed conservative-leaning groups facing identity threats—such as a cultural takeover through immigration, job losses to be precipitated by extreme environmentalists, and liberal accusations of conservatives as bigots—in efforts to encourage “extreme right-wing voters to be more confrontational.”53 At the same time, the IRA campaign sought to leverage identity threats to break apart Democratic coalitions, with a special focus on targeting those with interests in racial justice activism and African American heritage. The Computational Propaganda Project found the IRA consistently sought to encourage “African American voters to boycott elections or follow the wrong voting procedures in 2016, and more recently for Mexican American and Hispanic voters to distrust US institutions.”54
There are a number of factors that make targeted digital advertising a particularly attractive tool for manipulation campaigns seeking to exploit social division. Of the IRA operations, New Knowledge researchers conclude:
They exploited social unrest and human cognitive biases. The divisive propaganda Russia used to influence American thought and steer conversations for over three years wasn't always objectively false. The content designed to reinforce in-group dynamics would likely have offended outsiders who saw it, but the vast majority wasn't hate speech. Much of it wasn't even particularly objectionable. But it was absolutely intended to reinforce tribalism, to polarize and divide …55
Digital ad systems offer a great advantage for such efforts over mass audience print and broadcast media. First, microtargeting allows advertisers to carefully profile and target those suspected to be most susceptible to a specific identity threat. Second, well-targeted ads can be more inflammatory than mass ads without risking counterproductive effects. With mass advertising, political operatives know that such strategies can activate backlash effects that work against their goals.56 Third, the claims made by precisely targeted ads are unlikely to be questioned or challenged in their native media environments. Only audiences deemed likely to respond well to the ads are likely to see them.57 Fourth, popular social media are designed to favor the distribution of content that triggers immediate and strongly emotional responses.58 Lastly, digital ad systems allow manipulative operatives to continually refine their approaches through testing multiple variants of an ad (through split testing) with different audience parameters. A well-funded campaign can test tens of thousands of ad variants daily.59
Policy Approaches to Preventing Political Weaponization of Digital Advertising
As foreign interference and manipulation operations prompted global discussion of social media and disinformation campaigns, major social media companies started to announce they were ready to take on new responsibilities. Prior to 2016, major social media companies had generally shown little concern as to whether their networks helped circulate disinformation or manipulative content. Public outcry and political pressures forced a reckoning and company CEOs announced new commitments to fighting these uses of their platforms. Regulations surrounding digital advertising have not kept pace with rapid technological developments. Nor have regulators been able to consider the more gradual paradigm shift represented by digital advertising as data-driven targeting and testing have become central features of persuasion.
We concur with investigators who see self-regulatory efforts by social media and data service industries as severely inadequate responses to the threats weaponized advertising poses to democratic communication.60 Tech companies will need to provide technical input and expertise in tackling problems of political manipulation, but the demonstrated shortcomings of their self-regulatory actions so far show that state action is required. In addition to the demonstrated failings of tech companies to respond by themselves, self-regulation approaches have a number of inherent weaknesses: there is no powerful position for public advocates when industry interest and public interest diverge; self-regulation is difficult to enforce; and self-regulatory measures may not be well coordinated across firms and are subject to change without public or democratic input. Tech companies generally offer members of the public no reliable form of redress when individual or group harms are incurred.
In the following, we review and assess a wide range of policy options for regulatory approaches to thwarting foreign and domestic manipulation enabled by digital advertising. We are reviewing approaches suggested by researchers and government officials discussed in the documents listed in the bibliography. Our goal here is not to cover all recommendations from these reports comprehensively; rather, we synthesize what we see as the most promising ideas that pertain to data-driven political advertising. While we think some of these recommendations would be relatively simple to implement and enjoy wide popular support across many regions and countries, we also discuss their complications and tradeoffs. Different localities will need to fine-tune and adapt their own policies. In general, we want to suggest that diverse stakeholders should have input in the adaptation of policies governing digital advertising. Yet, the need for democratic input should not be used as a justification for delaying the implementation of new regulations. Policymakers must take action so that most of the decisions regarding digital ad systems are not left to the unilateral control of private companies, which understandably put their own pecuniary interests above other considerations.
Our recommendations are based on the diagnosis above identifying the basic capacities of digital advertising systems that enable weaponized political messaging. This “infrastructure approach” tackles the problem by examining how policy might dampen the communication capacities of data-driven advertising that allow operatives to target vulnerabilities to influence, as well as how digital media infrastructure can be designed in ways that positively promote free and open democratic communication.
The infrastructure approach differs most clearly from two other prominent approaches to the problem: a militarization approach and a bad actors approach. The militarization framework entails countries investing in greater surveillance of digital media—and potentially control over it—by military and intelligence agencies. Proponents of this approach also tend to advocate for deterrence of foreign interference attacks through counterattacks, sanctions, or diplomatic measures. We should note that encouraging greater surveillance and militarization of digital communication introduces its own threats to free and open democratic communications. These drawbacks must be fully explored, and the potential benefits of such measures carefully weighed against other options that do not introduce the same threats.
The “bad actors” approach is the most cautious and least disruptive to advertising business models. In this case, policymakers or tech companies focus their efforts on trying to identify a select set of “bad actors”—such as foreign agents—responsible for engaging in political manipulation. This approach tries to remove these troublemakers or the problematic content they have produced without reform of the digital advertising architecture of data collection, message targeting, and testing that provides opportunities for manipulation. Many of the measures introduced by Facebook, Google, and others as attempts at industry self-regulation over the past two years fall under this category. As discussed earlier, these measures have demonstrable weaknesses, and the number of actors—both foreign and domestic, in many countries—engaging in manipulative campaigns appears to have increased over the past two years.
One of the overarching challenges for any policy that applies special scrutiny or regulation to “political” advertising is deciding exactly what counts as a “political” advertisement. There are three major challenges to this task. First, the scope of “political” advertising is layered and difficult to define. One approach to defining what kinds of digital ads count as “political” focuses only on ads pertaining directly to elections. Many countries apply special regulation to advertisements that mention specific candidates running for office or ads promoted by political parties, candidates, or official groups supporting candidates. Such a narrow definition of “political,” however, creates giant holes in the filter that would not catch many of the advertising techniques manipulative actors use.
Manipulative campaigns may seek goals beyond election influence, including influencing public discourse about a specific issue or simply trying to amplify social divisions within a democracy to make it less stable. There is strong evidence that some of the campaigns connected to the Russian IRA were oriented toward this latter goal, as IRA accounts promoted competing sides of the same issue. For instance, IRA-linked social media accounts promoted two competing rallies set in Houston on the same day in 2016. One IRA page “United Muslims of America” ran ads promoting a rally to “Save Islamic Knowledge,” while another IRA page promoted a rally to “Stop the Islamification of Texas.”61 This kind of activity would fall through the gaping holes of policies that apply only to ads focused on electoral candidates.
Even campaigns directed specifically to interfere with election outcomes may use ads that do not mention specific candidates or races. One analysis reviewed all 3,517 ads connected to the Russian IRA that targeted US Facebook users from June 2015 to August 2017. While much of this activity occurred prior to November 2016 and likely sought to influence the US elections, the analysts found that only a very small fraction mentioned any candidates by name.62 Many more ads sought to exacerbate tensions around race and other social identities and cleavages in ways that could have had electoral impact without mentioning candidates or parties.
A broader approach to defining the term “political” should seek to include discussion of key political issues, in addition to references to specific candidates or parties. In a number of countries, Facebook has been identifying advertising as “political” if its content is “related to politics or issues of national importance.”63 This is generally stronger than Google's current policies which, in the case of the European Union (EU; though similar for other countries where Google has rolled out this policy), only include ads “that feature a political party, or a current elected officeholder or candidate for the EU Parliament.”64 Nonetheless, blind spots persist in Facebook's approach that indicate the difficulty of coming up with any comprehensive definition of “political.” Facebook's definition focuses only on national issues and does not cover ad campaigns targeting local issues. As resources for local news production decline rapidly in market-driven news economies, influence operations aimed at local levels may actually be the most powerful. Furthermore, exactly what counts as an issue of “national importance” is a matter of debate. Facebook's list of issues of national importance for the United States includes “infrastructure” and “government reform.” Yet it is unclear whether ads addressing, for instance, issues relating to technology regulation would qualify.
Second, even with a settled definition of political ads, ad platforms will find it challenging to identify which ads fall under that definition at a mass scale. In June 2018, Facebook representatives told UK investigators, “Our systems do not have a perfect or reliable way to classify the category that advertisements (which are developed and distributed by third-parties on our platform) fall in, whether it is political or housing or educational or otherwise.”65 This problem is hardly unique to Facebook, as many online ad platforms sell exponentially more ads and ad variants than traditional media outlets and do not have humans review their content. Ravel, Woolley, and Sridharan warn, “algorithms used to police social media platforms are vulnerable to the biases and fallibility of their producers.”66 In their view, “Technology companies created the problems on their platforms that they now claim necessitate the use of technologies that haven't yet been realized—asking for trust that they have not earned.”67 Even as automated decision-making and machine learning advance, such processes will need oversight for bias and democratic accountability.
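A toy example helps show why narrow, keyword-driven classification is so leaky. The keyword list and ad texts below are invented, and real classification systems are far more elaborate, but the false negatives illustrate how issue-based content escapes candidate-centric definitions of “political.”

```python
# Toy illustration (invented keywords and ad texts) of why narrow classification
# misses much manipulative content: issue ads that never name a candidate or
# party are not flagged, even though they target political cleavages.
POLITICAL_KEYWORDS = {"senator", "congress", "election", "vote", "party", "candidate"}

def is_political_narrow(ad_text: str) -> bool:
    words = {w.strip(".,!?").lower() for w in ad_text.split()}
    return bool(words & POLITICAL_KEYWORDS)

ads = [
    "Vote Jane Doe for Senator on November 6!",                # flagged
    "Stand up for our heritage before it is erased.",          # not flagged
    "They look down on towns like ours. Share if you agree.",  # not flagged
]

for ad in ads:
    print(is_political_narrow(ad), "-", ad)
```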
Third, manipulation campaigns can use tactics to circumvent any definition of “political.” Political manipulation efforts might rely on targeted ads in some cases in which the ads themselves do not refer to political content. Instead, these campaigns might try to target a group with ads that promote an ostensibly nonpolitical social media feed or group. Only after gaining followers for such a group would the campaign start to introduce political themes into that feed. For instance, imagine a campaign runs targeted ads for a Facebook group that appears to be a support group for people suffering from social anxiety disorder. The campaign may try to gain the trust of followers in this support group, only to later introduce political messages to the group once they have gained trust. The Russian IRA tried tactics along these lines, in one case promoting a hotline for masturbation addiction to followers of a page it had created that appeared to be a site for deeply religious Christians.
Categorizing political speech or political data essentially draws a box around something and says: “this is political.” What is in the political box, then, is recognized as important to democracy and subject to special rules and standards to reflect that status. Yet, the box does not contain everything, so what is not in the box is held to different rules and standards. No box will perfectly encapsulate all political speech. Nonetheless, the challenges outlined above should not justify inaction—they should encourage a capacious approach to defining political advertising. If implementing the policies discussed in the following for all advertisements is deemed impractical, regulations based on a broad understanding of “political” can still significantly restrict the options available to manipulative political operatives.
For any policies that rely on ad platforms to identify “political ads,” we recommend that regulators provide guidelines for defining “political” and open avenues for courts or regulatory commissions to provide oversight of the processes platforms use to identify political advertisements. Whatever body is tasked with setting these definitions—whether a public commission or private company—should seek input from civil and human rights organizations and diverse civil society stakeholders.
Transparency in Encounters with Ads
One common approach policymakers have suggested to prevent the weaponization of ads focuses on transparency surrounding ad practices. At a minimum, users should know who is targeting them and why they are being targeted. This same transparency principle also entails that information on targeted political ad campaigns is made available to independent researchers and journalists who can act as public interest watchdogs. Journalists, researchers, and independent auditors or regulators can make sense of larger patterns in targeted advertising from a vantage point distinct from the individualized experience of users. When these groups convey their insights to members of the public and public interest advocates, they serve a critical function in making the workings of digital advertising more transparent to users and democratic publics. Regulators across the globe have barely started to address the paradigm shift represented by personalized digital advertising. While not everything is new in digital advertising, both industry accounts and academic research suggest personalized advertising operates according to a logic quite different from previous iterations of ads.68 Such a shift may call for rethinking very basic issues about advertising and what kind of public interest ground rules may be needed. So far, the policies that have most affected digital advertising, such as the EU's General Data Protection Regulation (GDPR), have not directly tackled new concerns arising from targeted advertising, but rather have primarily addressed concerns regarding privacy and data rights. We suggest three overarching goals for policies regarding targeted political advertising:
Ads platforms should help users to make informed interpretations and judgments about political advertisements.
Ad platforms and political advertisers should provide governments, researchers, and civil society with information for audits and tools for efficiently monitoring ad campaigns as a key component of democratic oversight.
Ad platforms must limit fraudulent, illegitimate campaigns (e.g., scammers, foreign operatives).
To operationalize these goals, we recommend a series of concrete policies aimed at increasing transparency in users' encounters with ads and in ad funding and sponsorship.
Transparency in ad design, targeting, and profiling:
- A. Require on-ad disclaimers informing users of ad sponsors, the specific targeting parameters used by the advertiser, and identification of all the sources of data used in targeting (e.g., platform activity, external browsing, public records).
- B. Require that all political ad disclaimers also include a prominent link for more information about specific ads and their sponsors, including the amount of money spent on the ad, the time period the ad is set to run, the sponsors' identified donors, and a further link to all variants of the same ad.
- C. Require all political ad spaces to be designed in ways that foreground users' awareness that they are encountering a paid advertisement. Ghosh and Scott recommend, “All political ads that appear in social media streams should be clearly marked with a consistent designation, such as a bright red box that is labeled ‘Political Ad’ in bold white text, or bold red text in the subtitle of a video ad.”69
- D. When users interact with digital ads—through clicks, likes, shares—a pop-up should remind them that they are interacting with a paid political ad and explain any ways in which such an interaction may leave data traces that could influence future targeting.
Users' interests are not fully served simply by making information about political ads available if it is time-consuming to find. We must consider the incredible volume of ads users are subjected to online. As cognitive psychological research documents quite convincingly, humans rely on cues that can be processed quickly to form mental impressions and make decisions.70 The design of advertising interfaces determines which features are salient and which are not. Advertisers would generally prefer ad spaces designed with minimal contextual cues to prompt users to identify them as ads. In many cases, digital advertisers also prefer their messages blend into streams of so-called “organic” web content without calling attention to themselves as ads. Yet, we suggest there is a public interest in designing ads in ways that not only make such information available but bring it to the foreground of perception and processing.
Transparency and verification in ad sponsorship:
- A. The sponsors and funding sources of targeted ads should have their identities verified. This could happen either through a public agency or through strict requirements that place verification on ad platforms and exchanges. In the latter case, large ad platforms should be bound by “know your customer” regulations for political advertisements. They should take all reasonable steps to accurately verify the full identity of the sponsoring organization, including the identity of its major donors. In this respect, ad platforms will be required to take on a responsibility to prevent foreign electoral interference and manipulation campaigns similar to the steps banks take to prevent money laundering.
- B. “Dark money” should be eliminated in targeted advertising by requiring all political advertisements that make use of targeting data to identify all significant funders and donors.
- C. The UK House of Commons reports, “Some organizations such as the Institute of Practitioners in Advertising (IPA) support creating a central public register of online political adverts, rather than leaving it to the social media companies themselves.” We see this as a promising recommendation that would benefit the public interest by eliminating inconsistencies among platform-specific archives.
- D. Government or civil society commissions should partner with ad platforms to create criteria and procedures for identifying “inauthentic activity” from digital advertisers that create false appearances.
Discussion of Transparency
Major tech companies such as Facebook and Google have already started to implement their own policies requiring certain types of political ads in some countries to include a disclaimer naming a sponsor and to go through a verification process. These verification processes, however, have proved feeble. Just before the 2018 midterm election, a VICE news investigation team “applied to buy fake ads on behalf of all 100 sitting U.S. senators, including ads ‘Paid for by’ Mitch McConnell and Chuck Schumer. All 100 sailed through the system, indicating that just about anyone can buy an ad identified as ‘Paid for by’ a major U.S. politician.”71 Even if measures are put in place to prevent advertisers from impersonating elected officials, as long as sponsors are easily able to create front groups that provide little information about their donors, such disclaimers do little to provide meaningful information to citizens, journalists, researchers, or regulators. In certain circumstances, there are legitimate concerns that requiring identification of donors for political ads could chill speech. We recommend policymakers consider tradeoffs carefully, but we think that, given that targeted advertising relies on users' personal data, there is more justification for limiting anonymous speech by large donors in this area than in others. Alternatively, policymakers could require that platforms specifically ask users if they are willing to allow their data to be used by political groups that do not disclose all major donors. We suspect that requiring such explicit permission from users would effectively end the practice of anonymously funded targeted ads.
Requiring ad platforms to stringently verify the identity of sponsoring organizations and their financing is a separate issue from requiring sponsors to make their donors public. Making ad sales contingent upon verification is a crucial step to preventing undisclosed foreign influence operations from using targeted advertising. If the burdens of a rigorous verification process significantly disadvantage small advertisers, policymakers could consider whether to place spending thresholds below which advertisers could use a less rigorous process, though ad platforms would need to take steps to prevent abuse of this leniency.
Data Rights
Ad-supported manipulation strategies depend on the widespread collection and exchange of consumer data. Social media platforms and other companies in digital marketing have historically faced few restrictions on their data practices. This “wild west” scenario is shifting as policymakers across the world have begun to implement various data protection and privacy regulations.72 The most important of these is the EU's GDPR, which provides individuals with a range of data rights and privacy protections and is often cited as an international model for policymakers.73
In general terms, data rights give individuals control over the ways in which their personal information is collected and used. This approach prioritizes individual autonomy and is grounded in the principle of informed consent. Paired with robust enforcement mechanisms, data rights can shift the asymmetries of information and control that characterize individuals' engagement with advertising platforms.
Empowering individuals to control how their information is used and exchanged could significantly “blunt the precision” of the profiling, audience segmentation, and targeted messaging systems that have proven ripe for political abuse.74 Strong opt-in regulations in the model of the GDPR would likely reduce the supply of targeting data and audience attention available to manipulation campaigns, limiting their effectiveness and tactical options. Limiting the overall supply of advertising data would also cut down the potential for security breaches.
Proposals under the data rights framework include the following:
Political profiling and ad targeting should be strictly opt-in services that require individual consent. Following the GDPR model, valid consent must be obtained in advance and be “freely given, specific, informed and unambiguous.”75
Because ad profiling and targeting systems use a wide array of data, consent-based data rights measures should apply to a broad scope of personal information.
Consent should require periodic renewal to reflect the fact that data practices, as well as individuals' privacy preferences, change over time.
Consent should be obtained on as granular a basis as possible. Ideally, individuals should have control over how their data is used not only by platforms, but by specific advertisers.
Discussion of Data Rights Approaches
Models of consent such as the “Notice and Choice” opt-out standard in the United States are largely ineffectual for stemming ad-supported political manipulation and are better understood as failed policies that abet a wide range of privacy harms.76 In contrast, the GDPR provides a strong model for codifying “opt-in” consent to process personal data, stipulating that valid consent must be “freely given, specific, informed and unambiguous” and must be obtained in advance of data processing.77
As the GDPR makes clear, meaningful consent requires absolute transparency and clarity from data processors when making disclosures about data practices.78 Any application of consent-based regulations to problems of ad-supported political manipulation must be linked with the kinds of transparency measures we discuss earlier. Additional GDPR protections like “purpose specification” and “use limitation” are meant to ensure that data can be used only for purposes that are specifically agreed to by the individual. Generally speaking, explicit consent is required when data collected for one purpose is used for another. Robust opt-in regulations like these are potentially powerful disruptors of advertising-based political manipulation by providing a “shield against microtargeting.”79 Applied broadly, such measures would significantly impact not only social media companies and ad platforms but the wider data broker industry that has moved rapidly into political advertising in recent years.80 However, even more international cooperation may be required to properly regulate data brokers and ad exchanges while keeping web content accessible across international boundaries.
Not all implementations of data rights are created equal. Opt-out policy regimes, such as the “Notice and Choice” model in the United States, represent a version of data rights that does very little to prevent ad-supported political manipulation.81 Strong opt-in consent regimes like the GDPR, which has already brought major changes to the digital advertising industry, could have significant impacts. How significant depends on the degree to which people decline to opt in to ad targeting. This is an empirical question that policymakers should study through surveys and by obtaining consent-rate data from companies' GDPR compliance activities. Evidence suggests high levels of dissatisfaction with online privacy at large.82 According to one recent survey, 68 percent of respondents found “tracking online activity to tailor advertisements” to be unethical.83
Consent-based data rights measures must apply to a broad scope of personal information if they are to be effective mitigators of advertising-supported political manipulation. Applying protections and rights to sensitive personal information (such as data revealing racial/ethnic origin, political opinions, or religious beliefs) is only a baseline. Ad profiling and targeting systems use many kinds of metadata and computationally derived data to infer and predict consumer information. Even the GDPR, which designates a category of sensitive personal information with special protections, leaves gaps for creative applications of political targeting by proxy and, according to the UK House of Commons, does not protect “inferred data.”84
Consent should never be granted in perpetuity, but should instead require periodic renewal to reflect the fact that data practices, as well as individuals' privacy preferences, change over time. Individuals should also be able to withdraw consent at any time. Implementing expiration dates offsets some of the burden that consent-based approaches place on individuals to routinely and proactively maintain their privacy choices. Instead, the burden should be on advertisers and platforms to periodically reach out to individuals to obtain renewed consent.
The GDPR's consent requirements are triggered differentially across a range of variables, prompting some concerns about regulatory “gaps” and “loopholes.”85 For example, a number of conditions exist whereby consent is not required to process personal information. While a full discussion of these issues is beyond the scope of this article, the question of when consent should be required to use data for targeted advertising is of critical importance.
Leading social media platforms and ad networks operate at a massive scale, providing sophisticated communication tools to millions of advertisers with diverse motives and objectives. In such an environment, consent at the platform level should not imply consent across the board for all advertisers. We propose that consent requirements be applied to the core capacities of digital advertising—behavioral profiling and targeted messaging—and that consent be obtained on as granular a basis as possible to give individuals control over how their data is used by different advertisers.
One implementation of granularity would apply consent requirements not only to the platforms that provide advertising infrastructure services, but to every advertiser that uses those platforms to profile and target individuals. Rather than simply asking individuals for blanket consent to all manner of targeted advertising (as Facebook has attempted to do under the GDPR86), permission could be obtained by individual advertisers on a platform-by-platform basis. For example, XYZ Political Action Committee (PAC) could be required to obtain consent from individuals before targeting them on Facebook, regardless of whether the PAC imports its own database of supporters or simply uses Facebook's baked-in ad targeting systems.87 If the PAC then wanted to reach those same individuals on another platform, further consent could be required. If split testing were used in any of these instances, separate and distinct consent could be mandated as well.
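To make the mechanics concrete, here is a minimal sketch of how a granular consent register might work, assuming hypothetical identifiers (an "xyz-pac" advertiser, a "facebook" platform) and an illustrative twelve-month validity window to reflect the renewal principle discussed above. It illustrates the proposal's logic; it is not a prescription for any particular platform's implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical renewal window: consent lapses unless renewed.
# The twelve-month figure is illustrative only.
CONSENT_VALIDITY = timedelta(days=365)

@dataclass(frozen=True)
class ConsentRecord:
    user_id: str
    advertiser_id: str   # e.g., "xyz-pac"; consent is advertiser-specific
    platform_id: str     # e.g., "facebook"; and platform-specific
    purpose: str         # e.g., "political-targeting" or "split-testing"
    granted_at: datetime
    withdrawn: bool = False

class ConsentRegister:
    """Toy register of granular, opt-in consent records."""

    def __init__(self) -> None:
        self._records: list[ConsentRecord] = []

    def grant(self, record: ConsentRecord) -> None:
        self._records.append(record)

    def may_target(self, user_id: str, advertiser_id: str,
                   platform_id: str, purpose: str,
                   now: datetime | None = None) -> bool:
        """Targeting is permitted only if a current, unwithdrawn record exists
        for this exact (user, advertiser, platform, purpose) combination."""
        now = now or datetime.utcnow()
        return any(
            r.user_id == user_id
            and r.advertiser_id == advertiser_id
            and r.platform_id == platform_id
            and r.purpose == purpose
            and not r.withdrawn
            and now - r.granted_at <= CONSENT_VALIDITY
            for r in self._records
        )
```

Under such a scheme, the default for any combination not present in the register is refusal, which is the practical meaning of opt-in; an advertiser wanting to run split tests as well as standard targeting would need separate consent records for each purpose.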
We contend that granular consent aligns with the spirit of the GDPR's purpose specification requirement. Guidelines from the Article 29 Data Protection Working Party (the precursor to the European Data Protection Board) suggest that data processors should “consider introducing a process of granular consent where they provide a clear and simple way for data subjects to agree to different purposes for processing.”88 Current interpretations seem to treat targeted advertising as a single category of purpose. We argue that advertising contains a spectrum of purposes dependent upon advertiser identities, objectives, and targeting mechanisms. Policy should recognize that important distinctions exist between an ad campaign that uses profile data to target individuals about a divisive social issue and a consumer product campaign that uses demographics to reach a broad audience.
Granular consent extends the basic principle that people must be informed in order to make choices about how their data is used. This approach makes advertisers accountable to the targets of their influence campaigns, makes microtargeting more visible, and decreases the likelihood that people will be targeted by entities they do not trust.
We acknowledge that “granular consent” as outlined here would face significant challenges. Industry critics would likely argue that such a plan would be too onerous, would produce bad user experiences, and would “kill innovation.” These may be valid concerns; however, it is worth pointing out that, as a matter of course, the digital advertising industry has long levied such complaints against virtually all regulatory measures aimed in its direction. If companies truly believe that consumers want targeted advertising, the principle of granular consent should not present a major threat to their business model, design questions notwithstanding. The real pushback would stem from the fact that the likely result of granular consent is that most people would opt out of targeting by many advertisers. On this point, the rollout of the GDPR is instructive: major platforms have attempted to thwart meaningful consent through bundling and other means.
A growing body of research finds significant flaws in the notion that individual consent should be the core mechanism of data policy.89 Though sustained engagement with this literature is beyond the scope of this article, it is useful to summarize some key findings to clarify the point that data rights must be considered in conjunction with other policy frameworks to combat political manipulation. Market imperatives incentivize companies to game and undermine consent mechanisms and other forms of privacy controls by making them hard to find, difficult to use, or flawed by design.90 As tech journalist Will Oremus writes, these tactics allow companies to “mollify privacy critics while maintaining the status quo” of unchecked data collection and processing.91 Strong regulations like the GDPR attempt to account for these issues by requiring that consent be “freely given, specific, informed and unambiguous.”
Even when implemented in good faith, consent mechanisms place the burden on individuals to make judgments about how their data will be used in an increasingly complex, and increasingly unknowable, information ecosystem. Daniel Solove, a leading legal scholar of privacy, frames consent within a “privacy self-management” paradigm, which fails to address a range of “cognitive problems” and structural constraints that “impair individuals' ability to make informed, rational choices about the costs and benefits of consenting to the collection, use, and disclosure of their personal data.”92 While consent laudably seeks to prioritize individual autonomy, privacy self-management is undermined by the structure of data collection markets. In a system where data is collected over time, exchanged, combined, and used in unpredictable ways by various entities, it has become impossible “for people to weigh the costs and benefits of revealing information or permitting its use or transfer without an understanding of the potential downstream uses.”93
Data rights regulations like the GDPR are designed to address a wide set of concerns relating to digital information. While political manipulation is increasingly understood to be linked to data rights and privacy issues, it has not been a driving force of policy design in this area. Data rights may help blunt the precision of weaponized ad targeting, but they should not be the only policy tool in the kit.
Regulating Data-Driven Advertising Capacities in the Public Interest
When applied to political manipulation, data rights approaches are valuable to the extent that they limit the pools of data and human attention that are available to political influence operatives. Like transparency measures, data rights are meant to empower individuals to navigate digital platforms with greater awareness, purpose, and autonomy. In effect, these initiatives place the burden of upholding democratic communications norms at the nexus of the consumer/ad platform transaction. Once market conditions are properly calibrated, individuals are largely left to fend for themselves. Along similar lines, competition policy has also been suggested as a tool to remedy political manipulation and disinformation.94 The general notion is that large platforms concentrate risk for political interference and that uncompetitive markets insulate platforms from the consequences of poor data practices. Proposals for reconfigured antitrust review, data portability, and interoperability standards are recommended under the rationale that market forces will diversify the tech services landscape and give consumers more choices that enhance privacy and reduce manipulative targeting.
In addition to market-based approaches, policymakers should consider more direct forms of intervention into the data-driven advertising capacities that are most susceptible to abuse. As Hartzog notes, rather than offloading risk to consumers through transparency guidelines and consent mechanisms, “strong rules limiting collection and storage on the front end can mitigate concern about the privacy problems raised through data analytics, sharing, and exploitation.”95
The UK House of Commons final report on Disinformation and Fake News proposes “re-introducing friction into the online experience.”96 While that report focuses on slowing down user interactivity “to give people time to consider what they are writing and sharing,” we propose that incorporating friction into ad targeting systems could be an effective means to tamp down advertising-supported political manipulation. Proposals in this area generally seek to
Limit advertisers' capacities to find and target vulnerabilities
Counteract the tendency of online political advertising toward niche targeting, which can amplify social segmentation, by creating incentives that encourage campaigns to address a broad and heterogeneous public sphere
If political advertising requires elevated codes of transparency and data rights in order to meet public interest goals, then policymakers should also consider higher public interest standards for the tools and techniques of political influence operations. Such an approach draws from and extends GDPR-style privacy regulation, which as Bradshaw et al. note, “has gaps in coverage and enforcement that limit its effectiveness to address all problems associated with social media manipulation and data-driven targeting.”97
Proposals under the category of public interest ad regulation include
Political profiling and targeting could be inhibited by strong data minimization standards such as those mandated by the GDPR. Key components of data minimization are “collecting personal data only when it is absolutely needed; deciding if some types of data should never be collected; keeping data only for as long as necessary; and limiting access to only those who truly need it.”98
Advertising profile information or certain categories therein could be subject to firm expiration dates. In such cases, “old data” would routinely be expunged from storage systems.99 This would limit advertisers' ability to develop profiles over long periods of time and could shift advertising away from intermittent communication toward more periodic contact with trusted entities (a minimal sketch of such an expiration rule follows this list). The GDPR includes rules intended to limit data storage, though they appear to give wide discretion to data processors/controllers.
Policymakers should closely scrutinize specific advertising techniques that present clear opportunities for abuse and convene multistakeholder discussions about their social benefits and costs. Policymakers should move to constrain profiling and targeting practices that are found to present unacceptable levels of political risk. Lookalike targeting,100 geotargeting,101 cross-device tracking, third-party data brokering,102 split testing, and microtargeting are among the techniques that deserve heightened regulatory review.
Policymakers should commission or undertake research into policies that could encourage political advertisers to forgo microtargeting and instead address broad and heterogeneous constituencies. Such a result could potentially follow from policies that greatly restrict data collection or targeting capacities. Yet policymakers might also consider more direct routes to counteracting the economic incentives that push political advertising toward digital microtargeting. Policies could include direct requirements that no more than a certain percentage of the political advertising an ad platform carries meet a well-defined criterion of microtargeting. Other policies could place additional burdens on funding used for microtargeted political advertising, such as prohibiting tax-deductible nonprofit funds from being used for microtargeted ads.
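Returning to the expiration proposal above, the following is a minimal sketch, assuming hypothetical field names and an invented 180-day retention window, of how stored profile attributes might be routinely expunged once they age past a firm limit. The GDPR's storage limitation principle sets no specific figure, so the number is illustrative only.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical retention period for ad-profile attributes; the GDPR requires
# storage "no longer than necessary" but mandates no specific number of days.
RETENTION_PERIOD = timedelta(days=180)

@dataclass
class ProfileAttribute:
    name: str              # e.g., "inferred_interest:politics" (illustrative)
    value: str
    collected_at: datetime

def purge_expired(profile: list[ProfileAttribute],
                  now: datetime | None = None) -> list[ProfileAttribute]:
    """Drop any attribute older than the retention period, so that advertising
    profiles cannot accumulate indefinitely over time."""
    now = now or datetime.utcnow()
    return [a for a in profile if now - a.collected_at <= RETENTION_PERIOD]
```

Run on a regular schedule, a rule of this kind would cap how far back any profile can reach, regardless of how the data was originally collected.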
Discussion of Regulating Ad Capacities in the Public Interest
Policymakers must be direct in weighing the costs and benefits of advertising profiling and targeting techniques that present clear opportunities for abuse. This is a challenging task for several reasons. Policymakers have thus far had limited access to operational details of known political manipulation campaigns. Ad profiling is highly segmented and includes targeting criteria inferred from predictive analytics. Ostensibly “nonpolitical” data can be used as proxies for political ad targeting. Finally, data-driven advertising is highly profitable, especially for the largest platforms such as Facebook and Google, which wield significant political economic power. Even as questions swirl about the democratic implications of such systems, ad platforms continue to invest, expand, and fortify political buffers through lobbying and electoral campaign donations.103
Such challenges notwithstanding, if democratic political communication depends on an open public sphere, then policymakers should deliberate whether data-driven advertising designed to facilitate individualized communication is antithetical to democratic ideals. As we have argued in this article, ad-supported political manipulation often hinges upon the capacity to carve audiences into precise segments that can be targeted in exploitive ways. We therefore suggest that policymakers take particular note of possibilities to add friction into microtargeting processes. One proposal is to enact minimum thresholds for the size of targeted audiences.104 Such thresholds could operate on sliding scales to account for particular contexts (e.g., national vs. regional election), but the basic goal is to limit the capacity for advertisers to send individualized political messages. This idea has a degree of precedent in that platforms like Facebook already voluntarily reduce the distribution of posts deemed problematic or contravening community standards.105 A policy of minimum audience thresholds is similar in that it recognizes microtargeted political messaging itself to be problematic for democratic norms.
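The minimum-threshold idea can be expressed very simply. The sketch below assumes invented audience floors keyed to electoral context and refuses delivery when a targeted segment falls below the applicable floor; any real thresholds would need to be set through the kind of deliberation described above.

```python
# Hypothetical minimum audience sizes for political ad targeting, keyed to
# electoral context. The numbers are illustrative, not recommendations.
MIN_AUDIENCE_BY_CONTEXT = {
    "national": 50_000,
    "regional": 5_000,
    "municipal": 1_000,
}

def may_deliver_political_ad(estimated_audience_size: int, context: str) -> bool:
    """Block delivery when the targeted segment is smaller than the floor
    for the relevant electoral context (a sliding-scale threshold)."""
    floor = MIN_AUDIENCE_BY_CONTEXT.get(context)
    if floor is None:
        raise ValueError(f"unknown electoral context: {context}")
    return estimated_audience_size >= floor

# Example: a 300-person segment in a national election would be refused.
assert may_deliver_political_ad(300, "national") is False
```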
Perhaps surprisingly, one of the boldest proposals in this area comes from the digital advertising industry itself. The IPA, a major UK advertising trade association, has officially called for “a moratorium on micro-targeted political advertising online.”106 In the words of IPA President Sarah Golding: “Politics relies on the public square—on open, collective debate. We, however, believe micro-targeted political ads circumvent this. Very small numbers of voters can be targeted with specific messages that exist online only briefly.”107 To be clear, the IPA advocates for a temporary stoppage to minimize harm until a regulatory framework can be established. Nevertheless, the thrust of their position is that microtargeted political advertising as a general category of practice carries social costs that outweigh its benefits. The IPA's assessment is that “ad technology designed for consumer products and services” cannot be permitted to be “weaponized” for political ends.108
Designing Democratic Communication Infrastructures
In the long term, a robust approach to digital foreign interference and political manipulation will treat these problems as matters of communication infrastructure. A primary question citizens and policymakers must tackle is how digital infrastructure—including digital ad systems—can be built to encourage more open and meaningful democratic communication and to limit the potential for manipulative tactics to flourish. This challenge is partly analogous to designing a game. The best games incentivize fair competition and have built-in structures (or give rise to norms) that penalize cheating and other forms of malicious play. Like games, our digital communication systems are designed in ways that encourage certain types of activity and not others. Social media systems have been built to maximize revenues by keeping users engaged as long as possible so that they can be exposed to targeted ads. The architecture and operations of social media platforms are not shaped simply by programmers' hunches about what will interest users; rather, the major platforms engage in meticulous observation and testing to determine exactly what keeps users engaged for as long as possible. Unfortunately, as recent digital propaganda and disinformation campaigns demonstrate, when social media are calibrated to optimize this goal, they can undermine the kinds of communication that make democracies thrive. Left unchecked, they create what Judy Estrin and Sam Gill refer to as “digital pollution” or negative externalities, including manipulation and disinformation, trolling, digital addiction, and the upending of the revenue models that have traditionally supported commercial journalism.109
Digital advertising infrastructure is far from the only online avenue through which foreign interference and political manipulation campaigns operate. The policy responses discussed in this article focus broadly on reining in the capacities of digital ad systems that create accessible opportunities for political weaponization. Yet the scope of such recommendations is too narrow to address the full range of factors that interact with and compound the threats resulting from manipulation campaigns' use of digital ad systems. For a more holistic assessment, citizens, policymakers, and civil society actors need to take a wide-angle view of policymaking in contemporary media environments. This includes developing effective policies and enforcement mechanisms that help prevent manipulation campaigns from taking advantage of the peer-to-peer/publisher side of social media networks. One specific recommendation along these lines, found across a number of inquiries and reports, is that social media platforms should label automated accounts and bots as such and should not allow such accounts to affect popularity rankings and curation algorithms.110
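As a simple illustration of that last recommendation, the sketch below shows how engagement signals from accounts labeled as automated could be excluded from a popularity score. The data shape and field names are assumptions made for illustration, not a description of any platform's actual ranking system.

```python
from dataclasses import dataclass

@dataclass
class Engagement:
    account_id: str
    is_automated: bool   # assumes the platform labels bot accounts, per the proposal
    weight: float = 1.0  # e.g., one like, share, or comment

def popularity_score(engagements: list[Engagement]) -> float:
    """Sum engagement weights, counting only non-automated accounts so that
    labeled bots cannot inflate popularity rankings or curation signals."""
    return sum(e.weight for e in engagements if not e.is_automated)

# Example: two human interactions and one bot interaction yield a score of 2.0.
signals = [
    Engagement("alice", False),
    Engagement("bob", False),
    Engagement("bot-417", True),
]
assert popularity_score(signals) == 2.0
```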
Policymakers may also play productive roles in the stewardship of media environments beyond regulations that affect the specific design of social media platforms and ad systems. First, several reports and government inquiries have called for states to allocate additional resources to study political disinformation problems, noting the necessity for states to forge durable collaborative relationships with the private sector and civil society.111 Along these lines, some called for increased oversight and regulatory powers for dedicated privacy officials.112 Second, policymakers must take steps to secure resources for independent journalism organizations that seek to build trust across diverse groups of citizens. Commercial news revenues have been undermined as digital markets have shifted ad revenue away from content producers and toward ad platforms and intermediaries.113 This decline in resources devoted to journalism has helped to create a vacuum of reliable information and trust that manipulation campaigns attempt to exploit. To address journalism's revenue crisis, Pickard, among others, has proposed that states tax digital ad revenues to fund independent, nonprofit news production.114
More broadly, concerned tech workers, public interest advocates, and researchers are searching for ways to incorporate democratic accountability and input from diverse groups into the design decisions that determine social media architecture. Along these lines, Ravel, Woolley, and Sridharan recommend that digital platforms “consult with civil rights groups on an ongoing basis and incorporate findings into product development.”115 Policymakers may find ways to incentivize such collaborations or require public and civil society input, which could lead toward building more inclusive digital environments that are less prone to political manipulation. The success of efforts to introduce democratic accountability will be strongly impacted by the extent to which such measures can override the prevailing design imperative of communications systems to maximize private profits.
Footnotes
Ravel, Woolley, and Sridharan, 6.
Alex Stamos, “The Products That Have the Most Capability for Abuse Generally Have the Least Free Expression Concerns, Which Is Convenient. The Top Two, Advertising and Recommendation Engines, Are Especially Concerning Because They Put Content in Front of People *Who Did Not Ask to See It*,” Tweet, @alexstamos (blog), February 2, 2019, https://twitter.com/alexstamos/status/1091711395991670784.
“Removing Bad Actors on Facebook,” Facebook Newsroom (blog), July 31, 2018, https://newsroom.fb.com/news/2018/07/removing-bad-actors-on-facebook/.
Bradshaw, Neudert, and Howard.
Wardle.
European Commission, “High Representative of the Union”; House of Commons of Canada, “Democracy under Threat”; Koulolias et al. See also the bibliography for this article.
Jack, 2–3.
As Full Fact points out, a “moral panic” around fake news could prompt overreactions that threaten free speech. Full Fact.
Shane and Blinder.
Bradshaw and Howard.
U.S. v. Internet Research Agency.
Bradshaw and Howard.
Ibid.
Ghosh and Scott, “Russia's Election Interference.”
McNair.
Kaye; Kim et al.; Valentino-DeVries.
Google, “Changing Channels,” 12.
Bey et al.
“Hard Questions: What Is Facebook Doing to Address the Challenges It Faces? | Facebook Newsroom,” accessed March 23, 2019, https://newsroom.fb.com/news/2019/02/addressing-challenges/.
These “capacities” are an analytical disentanglement of the many overlapping practices and technologies of digital advertising. See also Tufekci.
For an exploration of these technical means that goes into more detail than we provide here, see Nadler, Crain, and Donovan.
United States Federal Trade Commission.
Angwin, Varner, and Tobin; Lumb; Dean.
Das and Kramer.
Brave RTB complaint; Engelhardt and Narayanan.
Davies.
Schechner and Secada.
Graves and Matz.
Digital records of behavior expose personal traits. Kosinski, Stillwell, and Graepel.
Bey et al., 82–83.
Angwin, Varner, and Tobin; Lumb.
Angwin and Parris Jr.; Spiecer et al.
Matz and Netzer; Sandra Matz et al.
Reilly.
Morris.
Enwemeka.
Ghosh and Scott, “Digital Deceit I.”
HubSpot, “What is deep learning?” https://blog.hubspot.com/marketing/what-is-deep-learning.
Kaptein et al.; Berkovsky, Kaptein, and Zancanaro, 18.
AdEspresso, “Optimizing your Facebook campaign objective,” https://adespresso.com/guides/facebook-ads-optimization/campaign-objective/.
Ghosh and Scott, “Digital Deceit I.”
“Frequently Asked Questions (FAQ)—AdChoices | Choix de Pub,” Digital Advertising Alliance of Canada (blog), accessed February 25, 2019, https://youradchoices.ca/faq/.
Ariely.
Calo; Shaw.
PHD Media.
Some of the landmark contributions to this line of critique include, Williamson; Packard; Ewen; McClintock.
Zuboff; Ghosh and Scott, “Digital Deceit I.”
Huddy, 738.
Riek, Mania, and Gaertner.
Howard, Ganesh, and Liotsiou; DiResta et al.
It is difficult to discern exactly how much of the traffic of IRA Facebook and Instagram accounts can be traced specifically to advertising. As the New Knowledge research points out, “Approximately two dozen Facebook and Instagram accounts achieved audience sizes over 100,000 followers; however, no data was provided to indicate what percentage of followers came from ad conversions, engagement with organic content, or suggestions from the recommendation engine.” DiResta et al., 38.
Ibid., 80–81.
Howard, Ganesh, and Liotsiou, 3.
Ibid.
DiResta et al., 99.
Roese and Sande; Fridkin and Kenney.
For a further exploration, see Jamieson.
Jones, Libert, and Tynski; Vaidhyanathan.
Beckett.
House of Commons of Canada, “Democracy under Threat,” 34.
Allbright.
Penzenstadler, Heath, and Guynn.
Facebook Business.
Google. “Political Content.”
UK House of Commons, “Disinformation and ‘Fake News’: Interim Report,” 37.
Ravel, Woolley, and Sridharan.
Ibid.
Turow.
Ghosh and Scott, “Digital Deceit II,” 14.
Kahneman.
Turton.
Information Commissioner's Office; Bradshaw, Neudert, and Howard.
Ghosh and Scott, “Digital Deceit II”; Greenspon and Owen.
Ghosh and Scott, “Digital Deceit II,” 22.
General Data Protection Regulation, Article 4(11).
Woodrow Hartzog, “Policy Principles for a Federal Data Privacy Framework in the United States,” testimony before the U.S. Senate Committee on Commerce, Science and Transportation (2019).
General Data Protection Regulation, Article 4(11).
Dillet.
Ravel, Woolley, and Sridharan, 14.
Chester and Montgomery.
Rothchild.
Centre for International Governance Innovation.
Sample was 6,387 adults in France, Germany, the United Kingdom, and the United States. RSA Security.
UK House of Commons, “Disinformation and ‘Fake News’: Final Report.”
Bradshaw, Neudert, and Howard; McCann and Hall.
In addition to seeking blanket consent from its users, Facebook has also “bundled” consent to advertising within its more general terms of service provision. At the time of this writing, privacy regulators in several EU countries are investigating this issue as it pertains to Facebook and other major ad platforms.
To the best of our knowledge, the GDPR is not clear on whether consent is required to be obtained by advertisers that use the built-in targeting capacities of an ad platform like Facebook.
“Guidelines on Automated Individual Decision-Making and Profiling for the Purposes of Regulation 2016/679” (Article 29 Data Protection Working Party, October 3, 2017), https://ec.europa.eu/newsroom/article29/item-detail.cfm?item_id=612053.
Hartzog; Solove; Rothchild.
Dillet; Hill; Tiku.
Oremus.
Solove, 1880–81.
Ibid., 1881.
Ghosh and Scott, “Digital Deceit II.”
Hartzog, “Policy Principles for a Federal Data Privacy Framework in the United States.”
UK House of Commons, “Disinformation and ‘Fake News’: Final Report.”
Bradshaw, Neudert, and Howard.
“Why DRN?—Digital Rights Now,” accessed March 5, 2019, https://digitalrightsnow.ca/why-drn/.
HTTP cookie protocols already include expiration functionality.
UK House of Commons, “Disinformation and ‘Fake News’: Interim Report.”
Ghosh and Scott, “Digital Deceit I.”
McCann and Hall.
Solon and Siddiqui.
UK House of Commons, “Disinformation and ‘Fake News’: Interim Report.”
“The Three-Part Recipe for Cleaning up Your News Feed | Facebook Newsroom,” accessed March 5, 2019, https://newsroom.fb.com/news/2018/05/inside-feed-reduce-remove-inform/.
“IPA to Call for Moratorium on Micro-Targeted Political Ads Online,” accessed March 5, 2019, https://ipa.co.uk/news/ipa-to-call-for-moratorium-on-micro-targeted-political-ads-online#.
Ibid.
Singer.
Estrin and Gill.
Ghosh and Scott, “Digital Deceit II.”
House of Commons of Canada, “Democracy under Threat”; UK House of Commons, “Disinformation and ‘Fake News’: Interim Report,” 68; Koulolias et al., 6.
Greenspon and Owen, 27; House of Commons of Canada, “Democracy under Threat,” 26; UK House of Commons, “Disinformation and ‘Fake News’: Interim Report.”
Nielsen and Ganter.
Pickard, “Break Facebook's Power”; “The Violence of the Market.”
Ravel, Woolley, and Sridharan, 22.
Bibliography
COURT CASE
U.S. v. Internet Research Agency, 18 U.S.C. §§ 2, 371, 1349, 1028A (U.S. Dist., D.C., 2018).