Abstract
The word “algorithm” is best understood as a generic term for automated decision-making. Algorithms can be coded by humans or they can become self-taught through machine learning. Cultural goods and news increasingly pass through information intermediaries known as platforms that rely on algorithms to filter, rank, sort, classify, and promote information. Algorithmic content recommendation acts as an important and increasingly contentious gatekeeper. Numerous controversies around the nature of content being recommended—from disturbing children's videos to conspiracies and political misinformation—have undermined confidence in the neutrality of these systems. Amid a generational challenge for media policy, algorithmic accountability has emerged as one area of regulatory innovation. Algorithmic accountability seeks to explain automated decision-making, ultimately locating responsibility and improving the overall system. This article focuses on the technical, systemic issues related to algorithmic accountability, highlighting that deployment matters as much as development when explaining algorithmic outcomes. After outlining the challenges faced by those seeking to enact algorithmic accountability, we conclude by comparing some emerging approaches to addressing cultural discoverability by different international policymakers.
Algorithms can act as instruments of cultural policy. Though debated, this statement reflects a growing acknowledgment that algorithms influence cultural expression and access to cultural content. By way of introduction, the UN Convention on the Protection and Promotion of the Diversity of Cultural Expressions is a key document of global cultural policy that seeks “to protect and promote the diversity of cultural expressions” and “create the conditions for cultures to flourish and to freely interact in a mutually beneficial manner.”1 UNESCO's 2018 report on the Convention notes that information and technology firms do change the conditions of cultural production and that “Google, Facebook, Amazon and other large platforms are not simply ‘online intermediaries,’ they are data companies and, as such, make every possible effort to safeguard and fully exploit their primary input.”2 In rejecting the label of intermediaries, the report emphasizes the transformative power of data-optimized algorithms for the future of cultural expression and access to culture.
The United Nations (UN) is not alone in recognizing the far-reaching implications of algorithms as policy instruments in cultural settings. Natascha Just and Michael Latzer claim that “algorithmic selection on the Internet tends to shape individuals' realities and consequently social order.”3 Our past research has found that algorithms play an important role in online content discoverability.4 Our discoverability framework draws on a growing literature about the modest but important impact of algorithms that classify, sort, filter, rank, and recommend content online.5 These concerns first gained prominence with the rise of search engines.6 As search engines became more popular—indeed, fundamental to navigating the Internet—critics noted how search algorithms increased the salience of some content and relegated other content behind the screen.7 And as search became more personalized, algorithms became a flashpoint for concerns about polarization and filter bubbles. Journalism scholars emphasized the ways algorithms influence the salience of certain types of news based on recommender algorithms' estimations of what is relevant to users.8 Similarly, algorithms for classifying and valuing online behaviors may amplify extremist content and views through personalized recommendations.9
While there is a large literature evaluating policy instruments,10 there are few frameworks to consider algorithms as policy instruments or forms of regulation, especially in cultural contexts.11 Our article proposes a framework to evaluate the barriers against holding algorithms publicly accountable as instruments of cultural policy. Building on cultural studies' use of circuits and moments to interpret culture,12 we identify three moments (input, code, and context) to evaluate how different algorithms act as part of media policy in cultural contexts. These moments do not simply offer the chance to make algorithmic regulation transparent, but provide opportunities to situate algorithms within larger systems of power and structural inequity.
Our framework contributes to the growing scholarship on algorithmic accountability that ultimately seeks to reveal the systems that code algorithms and create institutions of public, democratic governance for these technical forms of regulation. This article follows in the wake of Mike Ananny and Kate Crawford's perceptive insight that calls for transparency or open code “come at the cost of a deeper engagement with the material and ideological realities of contemporary computation.”13 We frame inputs, codes, and contexts as moments of algorithmic agency in cultural and media policy that correspond to regulatory issues that will likely be addressed by an emerging regulatory agenda (at least in some parts of the world), which we discuss in the conclusion.
Algorithms, Content, and Discontent
An algorithm can be broadly defined as a set of instructions to solve a problem or perform a task. In popular discussions of digital media, “algorithm” has become shorthand for an operating procedure that relies on data and software.14 These procedures can be written by human coders, or engineers can apply an approach called machine learning (ML) where the algorithms “learn” how to carry out tasks under various levels of human oversight.
Algorithms have been applied to all areas of information policy: information creation, flows, use, and, of particular importance to us, processing.15 Algorithms:
Create content, such as retrospective year-in-review videos on Facebook and, increasingly, material generated with advanced ML models like OpenAI's GPT-2 or deepfake techniques
Manage flows through active recommendations16 and content caching, as well as through negative measures such as shadow bans that limit the visibility of certain messages on social media17
Mediate interaction with information often through the array of signals, for example, likes and comments, that inform information flows and content discoverability
Rank and filter information in ways that create incentives and conditions of interaction similar to markets or system engineering18 that creators must learn and “game” to succeed online19
Working in combination, algorithms become infrastructures, platforms, or hybrids of the two that increasingly coordinate social, economic, and cultural activity.20
Applications of algorithms in finance, hiring, price manipulation, and risk assessment prompted wider reflection about the nature of automated decision-making across government and private industry.21 Regulators and researchers have begun asking questions about anticompetitiveness and market share among the dominant companies in the field of digital content and whether algorithms may have biases that encourage concentration.22 As Hindman notes in his deft analysis of the digital economy, algorithmic content recommendation favors larger firms with bigger catalogs of information, creating powerful lock-in effects that diminish the competitiveness of smaller firms. Policy has shifted as a result. In April 2018, the European Union (EU) proposed a regulation “promoting fairness and transparency for business users of online intermediation services” as part of its Digital Single Market strategy.
Algorithms automate media and cultural policy. Though platforms or technology companies may not accept this statement (at least when appearing before regulators), it reflects growing consensus in the literature. When applied to cultural content, algorithms become technical means of calculating complex matters of reception and interpretation, such as relevance, taste, enjoyment, and personality.23 Natali Helberger has made crucial contributions in this area, drawing connections between algorithmic recommendation and long-established goals of media policy, such as exposure diversity, and proposing conceptual frameworks for policy regarding recommender systems, including theories of democracy.24
There are four major approaches to information processing with direct application to media and cultural policy,25 while advances in ML suggest the potential for radical change in the recommendation landscape. These approaches, together with the emerging role of ML, are as follows:
Content-based: This kind of recommender seeks to match a user's taste profile to specific items that the system guesses the user will like. Individual pieces of content are tagged and categorized (often in multiple ways), and once users show interest in a specific tag or category, they are directed to other items in that same grouping.
Knowledge-based: These recommenders are usually implemented on platforms where user behavior is infrequent or past behavior is a poor predictor. Given a lack of existing information, these systems solicit guidance directly from the user to establish preferences. A drawback for platform operators is that knowledge-based and content-based approaches require them to obtain and manage a significant amount of information about the items in their catalog.
Collaborative: Often called collaborative filtering, this system avoids the dilemma of needing catalog information by matching users to other users rather than to items. This method was popularized by Amazon and its well-known “customers who bought this item also bought” approach to making suggestions.26 Collaborative filtering leverages the data contained within a mass of individual profiles to find shared interests between users and then looks for the “missing” items. Importantly, the system does not need to know that these users share an interest in a specific genre or creator; it only needs to analyze their behavior for commonalities (a minimal sketch of this logic follows this list). Amazon's recommender system has been widely influential and was adopted or adapted by other major platforms, including Netflix and YouTube.
Context-aware: As accurate recommendations have come to be seen as a source of value for platforms, other advances have shifted the terrain. The widespread adoption of mobile devices has made possible the development of context-aware recommendations.27 While contextual information such as the time or one's IP address was already detectable through web browsing, mobile devices have allowed finer-grained data, such as one's exact geographic location, to be included in recommender systems' calculations.
ML: ML is increasingly being used in recommender systems. ML “uses computers to simulate human learning and allows computers to identify and acquire knowledge from the real world, and improve performance of some tasks based on this new knowledge.”28 ML enables recommender systems to “learn” from their mistakes, independently gauging the success of their recommendations and adapting rapidly in response to implicit or explicit feedback. Advances in ML could allow recommender systems to devise new ways to calculate recommendations, uncovering patterns in user behavior that human engineers haven't considered looking for yet.
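To make the collaborative approach more concrete, the sketch below implements a minimal item-to-item collaborative filter in the spirit of the Amazon-style systems described above. The toy interaction data, variable names, and similarity measure are our own illustrative assumptions, not a description of any platform's production system, which would operate at vastly larger scale and blend many additional signals.

```python
# A minimal item-to-item collaborative filtering sketch (illustrative only).
# Users, items, and interactions are hypothetical.
from collections import defaultdict
from math import sqrt

# Toy implicit-feedback data: which items each user has interacted with.
interactions = {
    "user_a": {"song_1", "song_2", "song_3"},
    "user_b": {"song_2", "song_3", "song_4"},
    "user_c": {"song_1", "song_4"},
}

def item_similarity(item_x, item_y, data):
    """Cosine similarity between two items, based on the users they co-occur with."""
    users_x = {u for u, items in data.items() if item_x in items}
    users_y = {u for u, items in data.items() if item_y in items}
    if not users_x or not users_y:
        return 0.0
    return len(users_x & users_y) / sqrt(len(users_x) * len(users_y))

def recommend(user, data, top_n=3):
    """Score items the user has not seen by their similarity to items the user has."""
    seen = data[user]
    all_items = set().union(*data.values())
    scores = defaultdict(float)
    for candidate in all_items - seen:
        for item in seen:
            scores[candidate] += item_similarity(candidate, item, data)
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("user_c", interactions))  # e.g., ['song_2', 'song_3']
```

Note that the filter never needs to know why these users' histories overlap; it surfaces the “missing” items purely from behavioral co-occurrence, which is both the commercial appeal of the approach and a source of the accountability problems discussed below.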
These different approaches to algorithmic design have consequences when processing cultural content. For example, ML could have a major impact on content-based recommendation as an algorithm could be trained to analyze, sort, and classify content, including new arrivals to the catalog, as well as look for patterns within and across it. Previously only humans could judge content in this way, but if machines can, metaphorically speaking, watch every single item in Netflix's database and index them in unprecedented ways, content-based recommendation could move in unforeseeable directions.
Another promising possibility of ML is that popularity could matter less. For example, if a product has never been purchased on Amazon or a YouTube video has single-digit views, it is unlikely that a collaborative system will recommend these items to anyone (unless the producer pays for more direct promotion). ML algorithms could solve this outlier problem—provided they're not programmed to prioritize popularity—and recommend things that lack visibility or an existing fan base. New avenues of discovery could open. Let loose on a streaming platform's archive of musical recordings and data about our listening habits, an ML algorithm designed to “learn” about musical attributes could discern patterns in our consumption histories (favored chords, rhythms, lyrical motifs, etc.) that we are unable to identify ourselves without training and/or time. With these musicological, rather than social, patterns in mind, listeners might be recommended songs, artists, or genres it would never have occurred to them to try, or works that struggle to be discovered due to lack of promotion.
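The “outlier problem” described above is known in the recommender systems literature as the cold-start problem. As a minimal illustration of how scoring by content features rather than interaction counts could surface an item almost nobody has played, consider the following sketch; the catalog, feature names, and weights are invented and stand in for attributes an ML model might extract from the content itself.

```python
# Illustrative cold-start sketch: scoring by content features instead of popularity.
# Feature vectors and the user profile are invented; a real system might learn them
# from audio analysis, metadata, or an ML model trained on the catalog.

catalog = {
    # item: (play_count, {feature: strength})
    "hit_single":   (9_000_000, {"upbeat": 0.9, "minor_key": 0.1, "acoustic": 0.2}),
    "obscure_demo": (4,         {"upbeat": 0.2, "minor_key": 0.8, "acoustic": 0.9}),
}

# A taste profile inferred from one listener's history (hypothetical weights).
user_profile = {"upbeat": 0.1, "minor_key": 0.7, "acoustic": 0.8}

def content_score(features, profile):
    """Dot product of item features and user profile; popularity plays no role."""
    return sum(profile.get(f, 0.0) * w for f, w in features.items())

ranked = sorted(
    catalog,
    key=lambda item: content_score(catalog[item][1], user_profile),
    reverse=True,
)
print(ranked)  # ['obscure_demo', 'hit_single'] despite the huge gap in play counts
```

Because popularity never enters the score, the obscure recording can outrank the hit, at least in principle.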
Alternatively, these characteristics may be poor predictors of enjoyment, and an algorithmic recommender may never be able to calculate why someone prefers Carly Rae Jepsen to Justin Bieber. Despite popular enthusiasm for ML, software will likely continue to struggle with the broader and deeper social and affective dimensions of engaging with culture—the unquantifiable influence of feelings, friendship, and fandom. While our platforms and devices will continue to collect ever more, and ever more detailed, data about our behavior, the concept of “context-aware recommendations” obscures the judgments these algorithms make when they attempt to translate into code the irreducibly complex nature of what constitutes a “context” in our experiences of culture.
From these general concerns, we now identify a few specific issues raised by the use of algorithms in cultural contexts.
Solipsism or Serendipity?
Algorithmic recommendations may increase social isolation and diminish public culture by restricting the salience of cultural expressions. The worry is that the ubiquity of this algorithmic solipsism leads to polarized and self-interested understandings of the world. Critics who hold this view argue that personalized algorithms trap users in partisan information bubbles when it comes to the news, while they provide a restrictive diet of homogenous content when it comes to culture.29 Some scholars have claimed that these “filter bubbles” can amplify extremist points of view. Scholar and writer Zeynep Tufekci has warned about the possibility of algorithmic radicalization, where automated recommendations insidiously push users toward the fringes of political discourse.30
However, the filter bubble hypothesis has been disputed and its effects assailed as exaggerated.31 For example, journalism scholars Fletcher and Nielsen recently found that using Google to search for news stories leads to what they call “automated serendipity,”32 where search engine users are in fact exposed to a wider range of sources reflecting a diverse range of political positions.
Though the filter bubble thesis has been too fervently embraced, its appeal endures due to a less contentious claim: there is a lack of transparency around algorithmic inputs and code. For example, sociologist Francesca Tripodi observed that some people primarily motivated to use Google to “expand out from their ideological positions” remain “unaware that their queries will simply return information that reaffirms their beliefs depending on what phrase they choose to type into the search engine.”33 In other words, the outcomes of Google's algorithmic sorting are so difficult to anticipate that, as seen in Fletcher and Nielsen's study, some users read news sources they might have otherwise ignored, while, as in Tripodi's fieldwork, other users ideologically prefilter their search results even when explicitly trying to do the opposite.
Revenue Demands
A major problem with algorithmic recommendation is that the business models of content discovery platforms can supply the underlying logic rather than considerations of what is most beneficial to users or communities. The drive to demonstrate growth to shareholders can incentivize companies to develop algorithms focused primarily on keeping users glued to their screens and exposed to advertising. Or, in the case of platforms like Netflix that produce their own content, recommendations can be skewed to emphasize their own products above other, possibly more relevant results.34
Consider the effects of news aggregators programmed to optimize their filters to maximize user engagement. By valorizing attention-grabbing content that provokes an intense response, are we replacing traditional journalistic norms like balance and objectivity (however unrealized those norms may be in legacy media)? The purpose of ranking and recommending, broadly speaking, is to please readers and keep them immersed in the flow of content. What happens to democracy when this model is transposed to journalism? For example, news stories about American politics might engage readers outside of the United States, but does promoting these stories over less sensational local news leave those readers uninformed? Turning to cultural productions, what are the pitfalls of making content “sticky”? Who wins and who loses when engagement matters most?
User Data and Privacy
Considering how platforms measure engagement leads us to privacy concerns. Most of the established means for measuring engagement online are relatively simple, such as views of a webpage, shares on social media, or time spent reading, watching, or listening. These inputs already require user surveillance, but recommender system engineers continue to search for ways to dig deeper. In an article reviewing Amazon's groundbreaking collaborative filtering algorithm, two of its creators, Brent Smith and Greg Linden, speculate about the future of content recommendation; they claim that “discovery should be like talking with a friend who knows you, knows what you like, works with you at every step, and anticipates your needs.”35 Would digital assistants achieving this level of intimate attentiveness be in the best interests of Amazon's shareholders or its customers? Should online retailers or content platforms gain deep insight into our inner lives simply to give us better recommendations? Is the pursuit of “better” recommendations merely cover for more lucrative user data mining?
The Limits of Artificial Intelligence
The belief that in interacting with recommender systems we are dealing with a form of intelligence can be misleading. Most people have experienced clicking on a piece of content and seeing their recommendations turn alien or irrational—absentmindedly click on a viral video and now YouTube's algorithm thinks you're obsessed with kittens. Recommender systems cannot make the kind of sensitive, multidimensional judgments that humans can, which is why platforms use an ambiguous, neutral term like “engagement” to define their goal. These systems can never understand our personal, social, and cultural context fully due to their reliance on behavioristic proxies (clicking a link) to guess at correlated attributes (loves kittens).
For example, YouTube's deep-learning algorithm cannot tell when you watched a video because your friends were mocking it, you're researching conspiracy theories for school, you fell asleep with autoplay on, or your sister borrowed your phone—it only registers that you engaged with the video. As former data journalist and software developer Meredith Broussard puts it, “we talk about computers as being able to do anything, and that's just rhetoric because ultimately they're machines, and what they do is they compute, they calculate, and so anything you can turn into math, a computer can do.”36 Platforms count and measure signals to deduce our experiences of or feelings about content, but not every input is as clear as clicking the thumbs-up button and not all feelings and experiences can be translated into math.
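To underline how thin these behavioral proxies are, consider a deliberately naive sketch of proxy inference. Every viewing situation listed above produces an identical log entry, so the inferred “interest” is the same in each case; the log fields, threshold, and inference rule are invented for illustration and are far cruder than anything a real platform deploys.

```python
# Deliberately naive proxy inference: each plausible viewing situation below
# produces the same log entry, and therefore the same inferred "interest."

watch_log = [
    {"video": "kitten_compilation", "seconds_watched": 240},  # friends were mocking it
    {"video": "kitten_compilation", "seconds_watched": 240},  # fell asleep with autoplay on
    {"video": "kitten_compilation", "seconds_watched": 240},  # sibling borrowed the phone
]

ENGAGEMENT_THRESHOLD = 30  # seconds; an arbitrary illustrative cutoff

inferred_interests = {
    event["video"]
    for event in watch_log
    if event["seconds_watched"] >= ENGAGEMENT_THRESHOLD
}
print(inferred_interests)  # {'kitten_compilation'} -- the context never appears in the data
```

Whatever sophistication a real system adds on top of such signals, the context that gives the viewing its meaning is simply not in the data.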
Artificially intelligent recommender systems can also behave in unpredictable ways, and fully automating online discoverability could have aberrant results. For example, YouTube has been criticized for recommending disturbing content to children, or, in Wired UK's vivid description, “churning out blood, suicide and cannibalism.”37 The ML-powered algorithm appears to have judged the success of its recommendations entirely on whether users engaged with the content (i.e., how much time they spent watching it, which for very young children might be an especially meaningless metric). The culprit behind this phenomenon remains a mystery, illustrating some of the obstacles to algorithmic accountability that we will discuss later, such as the difficulty of tracing problems to their sources and these systems' vulnerability to manipulation by humans and bots.
Even in more quotidian situations, users can be perplexed or made uneasy by algorithms, prompting them to conjure theories to explain why the algorithm failed or misunderstood certain choices.38 In situations where users interact with algorithmic filters and recommendations over time, some may come to accept algorithmic categories as a scientific determination of their taste or interests.39 As platforms filter content in and out of users' feeds, one's “personal” taste becomes something developed in collaboration with algorithmic recommendation.
An Information Vacuum
Individual users aren't the only ones confronting a dearth of information about algorithmic outcomes on platforms. Indeed, even before considering the opacity of algorithmic processes, we currently lack ways to verify platforms' claims about something as simple as a view or play count. The problem is that most data about what content attracts attention comes from the platforms themselves or from affiliated data brokers. The audience measurement firm Nielsen, for example, collects data supplied by Facebook, YouTube, and Hulu when compiling its online viewership statistics.40 That the same companies that generate revenue by selling advertising measure their own viewership is problematic, causing ongoing concerns about overreporting41 or misleading audience metrics.42 Meanwhile, some platforms that don't rely on advertising, such as Netflix, refuse to disclose any audience data at all or do so only to boost publicity.
In the digital advertising industry, similar concerns have led to independent audits of ad impressions through firms like the Media Rating Council. A shift to external review has not yet happened for other online metrics even though there is historical precedent. Audience estimates for television and radio were established by third parties like Nielsen or polling firms, which supplemented the data disclosed by the cultural industries. In the platform era, efforts to develop independent sources of data face the challenge of potentially violating the platforms' terms of service or privacy agreements, and, furthermore, they require developing complex methods to investigate highly unstable objects of research.43
Algorithms as Regulation, Optimization as Policy
Des Freedman's distinction between media policy and media regulation helps us to understand how algorithms may act as regulatory instruments. If media regulation refers to “tools that are deployed on the media to achieve established policy goals,”44 then algorithms can be considered forms of regulation that calculate how to process cultural content.45 Algorithms are optimized, which means they are evaluated in comparison to theoretical models, hypothetical ideal states, or code considered state of the art. The optimal encodes how an algorithm processes information.46 If “media policy refers to the development of goals and norms leading to the creation of instruments that are designed to shape the structure and behavior of media systems,”47 then an algorithm's optimality can be considered a kind of cultural policy, but, crucially, one that is hidden from scrutiny.
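As a stylized illustration of how an optimality encodes policy, the sketch below compares two hypothetical objective functions a recommender might maximize. The candidate items, metrics, and weights are invented; the point is simply that whichever objective is chosen quietly functions as cultural policy.

```python
# Stylized sketch: the choice of objective function is itself a policy choice.
# Candidate items, predicted metrics, and weights are hypothetical.

candidates = [
    {"title": "viral_clip",  "expected_watch_minutes": 12.0, "is_local_production": False},
    {"title": "local_drama", "expected_watch_minutes": 9.0,  "is_local_production": True},
]

def engagement_objective(item):
    """Optimize purely for predicted attention."""
    return item["expected_watch_minutes"]

def diversity_weighted_objective(item, local_bonus=0.5):
    """Same signal, but with a deliberate boost for local productions."""
    return item["expected_watch_minutes"] * (1 + local_bonus * item["is_local_production"])

for objective in (engagement_objective, diversity_weighted_objective):
    best = max(candidates, key=objective)
    print(objective.__name__, "->", best["title"])
# engagement_objective -> viral_clip
# diversity_weighted_objective -> local_drama
```

Changing a single term in the objective changes which culture gets promoted, yet that choice is rarely visible to, or debatable by, anyone outside the firm.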
The line between the policy and the instrument is blurred in algorithmic decision-making, much as it is in public policy. As Pierre Lascoumes and Patrick Le Galès argue, “instruments at work are not purely technical: they produce specific effects, independently of their stated objectives (the aims ascribed to them), and they structure public policy according to their own logic.”48 The policy informs the instrument and the instrument informs the policy. The decision to use algorithms as tools of governance treats the problems of media systems as technical challenges capable of being solved by engineers, which, in turn, reflects a worldview circumscribed by the need to create rule-based order and manage cultural expression through code. We can see this logic at work in large platform companies' preference to manage their systems through algorithms rather than hire more staff, or, more accurately, to publicly proclaim the efficacy of automated regulation while hiding their reliance on a shadow workforce of human contractors and outsourced employees.49
We identify three moments that provide opportunities to consider the policy implications of algorithms:
Inputs, signals, ground truths, and training data that inform the operations of an algorithmic system
Written code, black-boxed algorithms, libraries, and other technical agents operating in tandem in a system
Contexts where the expected operation of a system meets its real-world application
These moments occur simultaneously, but distinguishing them helps clarify which agents make decisions, who or what decides whether these decisions are high quality, and how these practices might be made more accountable.
Inputs may unintentionally make policy decisions by ignoring bias. For instance, ML usually requires massive datasets so the algorithm can discover meaningful patterns, but those patterns may end up reflecting systemic inequity contained within the data. An ML algorithm trained on a corpus of English-language texts, for example, associated the word “genius” with men and “model” with women.50 Is this discriminatory outcome the fault of the algorithm, the developers, or a dataset that reflected a history of sexist assumptions? In other words, how a platform's engineers program its recommender system establishes the system's values, and some values, and some people, will inevitably be excluded. That exclusion is concealed both in how algorithms are optimized and in the prior choice to use algorithms to manage cultural expression at all.
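The kind of association reported in that study can be illustrated with a toy measurement of gendered “lean” in word embeddings. The three-dimensional vectors below are invented for demonstration; the cited research measures analogous asymmetries in embeddings trained on very large text corpora, where they emerge from the data itself.

```python
# Toy illustration of measuring gendered associations in word embeddings.
# The vectors are invented; real studies use embeddings trained on huge corpora.
import numpy as np

embeddings = {
    "he":     np.array([0.9, 0.1, 0.0]),
    "she":    np.array([0.1, 0.9, 0.0]),
    "genius": np.array([0.8, 0.2, 0.3]),
    "model":  np.array([0.2, 0.8, 0.3]),
}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# A positive value means the word leans toward "he"; a negative value, toward "she".
for word in ("genius", "model"):
    lean = cosine(embeddings[word], embeddings["he"]) - cosine(embeddings[word], embeddings["she"])
    print(word, round(lean, 2))
# genius 0.6  (leans toward "he")
# model -0.6  (leans toward "she")
```

Nothing in the code "decides" to be sexist; the asymmetry sits in the inputs, which is precisely why accountability cannot stop at the algorithm itself.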
Thinking of algorithms as policy instruments implies reconsidering coding as policymaking. Coding can conceal making cultural policy choices as solving purely technical problems. Communications scholar Mike Ananny argues that algorithms are presented as objective science, but the ways they process information “raise ethical concerns to the extent that they signal certainty, discourage alternative explorations, and create coherence among disparate objects—categorically narrowing the set of socially acceptable answers to the question of what ought to be done.”51 Algorithms also black box policy by translating its norms and rules into inscrutable, often proprietary systems that cannot be publicly scrutinized due to intellectual property or security concerns.
Algorithmic regulation can vary in its efficacy depending on the content. If inputs and code, as we discuss below, create enough structural barriers, then algorithms may not be effective, or worse, may perpetuate the problem. The idea that we can “fix bias” by better optimizing algorithmic systems rests on the assumption that every social problem has a technical solution and evades urgent questions about how technologies like facial recognition are used to perpetuate injustice irrespective of the efficacy of their algorithms.52 As Anna Lauren Hoffmann has argued, efforts to address data-driven discrimination often fail to do so in ways that reckon with the complexity of social hierarchies and intersecting forms of discrimination; likewise, positive forms of discrimination that advantage some people over others are rarely considered.53
To think about this more concretely, consider the question of whether a content recommendation system should factor race or gender into its calculations (rather than pretend to ignore them). As seen in activist campaigns like #OscarsSoWhite, the lack of representation and opportunity for actors, directors, writers, and other workers from historically marginalized communities is a major concern within most cultural industries as well as among the people who watch their creations. As algorithmic recommendations grow in influence, their role in fostering inequity must be scrutinized. Should streaming video platforms or news recommendation systems change their code to promote content that helps to increase visibility for underrepresented people and stories?54 Or could inputting this kind of data have harmful consequences? Whatever the case, neither the data nor the algorithms are neutral. The platforms that design and implement these algorithms must acknowledge systemic inequities and develop remedies, lest they reproduce and amplify injustice in the status quo.55
A Framework for Evaluating Algorithmic Policy Instruments
In this section, we elaborate our framework to consider how algorithms, as means of regulation, create barriers to accountable cultural policy. Table 1 summarizes this discussion, including some of the examples we mention below. At the operational level, accountability requires agents “to answer for their work, signals esteem for high quality work, and encourages diligent, responsible practices.”56 Achieving accountability leads to systems that perform to higher standards. Governments and other policymaking bodies are, at best, at the beginning stages of creating regulatory frameworks to address, let alone enforce, algorithmic accountability, and strategies vary considerably across national borders, suggesting that comparative research is necessary.
Table 1. Moments, obstacles, and examples

Moment | Obstacles | Examples
---|---|---
Inputs | Proprietary data; systemic inequity and prejudice in data; personalization; localization | Boston's Street Bump app; comparing Google results; Facebook's trending stories
Code | Black boxes; traceability; instability; unpredictability; lack of diversity | Spotify's Discover Weekly; children's YouTube recommendations; walkouts by Google employees
Context | Anticipated use or goals; changing media habits; vertical integration; manipulation | Fan campaigns; Microsoft's chatbot Tay; Netflix Originals; ubiquitous media consumption
Inputs
Proprietary Data
The data used in algorithmic decision-making is often confidential and not easily accessed by public institutions. Data is a known strategic asset, giving a competitive advantage to companies with direct access and the ability to aggregate data.57 User data is a particularly valuable form of capital, as recent revelations about Facebook giving special access to preferred partners have made evident.58 The critical importance of data to ML further increases its value and motivates companies to restrict access.
Systemic Inequity and Prejudice in Data
While often framed as politically neutral, data is a representation of an inequitable world. If used naively, data will reflect and reproduce established injustices. By looking at Google Search results, Safiya Umoja Noble exposes the uneven, and largely pornographic, political economy of digitized racialized bodies. Search, Noble argues, passes off deeply inequitable conditions of production as an objective reflection of reality.59 Relatedly, AI Now cofounder Kate Crawford has written about Boston's use of the Street Bump app, which used smartphone data to automatically detect road conditions, pinpointing potholes in need of repair.60 However, with smartphone adoption as low as 16 percent in the city at the time, and with smartphones inevitably less accessible to lower-income communities, the app abetted discrimination in the allocation of vital municipal services. These concerns matter in cultural contexts. What prepackaged analytics developed for advertising and marketing might find their way into cultural policy through algorithms and data? In light of concerns about social media analytics discussed earlier, how might these inputs have bias or miss important cultural indicators?61
Personalization
Algorithmic filters are almost always personalized through one of the techniques discussed earlier, and so any perspective on a given system is just that: one perspective among many. No single outcome can be assumed to be indicative of the overall system, and it is impossible for even an expert user to deduce how the algorithm functions in the abstract based on individual results. When sharing anecdotes about political bias in Google news results, for example, one must keep in mind that not every user will see the same results for the same search query.
Localization
As with the problem of personalization, context-aware algorithms vary their outcomes according to geography, particularly the different ways it can be signaled by localization, user preference, mobile GPS location, or IP address. For example, Facebook in Canada used to restrict its “trending stories” feature to certain linguistic regions, showing English-speaking Canadians the list without doing the same for French-speaking Canadians. More broadly, do systems developed in English perform differently in other languages?
Code
Black Boxes
Algorithmic systems are often referred to as “black boxes” because users cannot see how they arrive at their outcomes.62 With the advent of ML, we face a situation where these decisions are not only difficult for users to understand but also for the algorithms' developers. Black boxing is often deliberate. Knowledge of the operation of Google's search or Spotify's Discover Weekly algorithms represents a competitive advantage for these companies and will not be readily disclosed. Criminal law (in Canada, Criminal Code provisions against acts such as computer mischief) and new trade law, such as the pending United States–Mexico–Canada Agreement, may restrict access to source code, establishing formidable legal barriers to oversight. However, as Jenna Burrell has pointed out, algorithmic opacity can also stem from unintentional factors, such as the fact that “reading” code generally requires special training or that ML algorithms by definition change during use—simply “opening” the box, so to speak, may not be illuminating.63
Traceability
Algorithms function as components in distributed systems where responsibilities are shared. Recommendations and other forms of filtering do not result from any one part but from the interactions among many. When algorithmic processes lead to harmful outcomes, traceability—discovering the source of the failure, assigning responsibility for it, and punishing or at least educating those responsible—becomes a key ethical concern.64 As in the case of children's YouTube recommendations, tracing problems to their source can be particularly daunting in ML systems, where the developer, the training data, or malicious users could all be the origin.
Instability
Most platforms constantly develop and release updates to their code.65 Cloud computing enables these changes to occur at scale and often without indication to the user. As a result, all observations of algorithmic systems must be sensitive to the time and state of their unstable object of research. Instability necessitates continual monitoring to detect changes.
Unpredictability
Algorithmic systems, especially ML and multiagent ones, exhibit emergent behavior that cannot always be anticipated. Software bugs and complexity introduce further unpredictability that compounds uncertainty about intent. Devising an algorithm to solve one problem may lead to it inadvertently creating new ones, such as copyright enforcement or antipiracy algorithms blocking users or content they weren't intended to police.66
Lack of Diversity in Development Teams
Algorithmic processes may fail or further marginalize people and communities if they perpetuate the status quo. A significant obstacle to ameliorating this problem is a lack of diversity in the technology industry, where 76 percent of technical jobs are held by men and the overall workforce is 95 percent white,67 meaning that underrepresented voices go unheard and important perspectives are excluded.
Context
Anticipated Use or Goals
What goals do algorithmic systems hold? What optimal behaviors are they trying to nudge users toward? How do these automated systems determine whether they have succeeded? These questions concern optimization, and these principles are usually neither public nor knowable through digital methods. Algorithmic processes can seem enigmatic or uncanny to users because they don't know what optimal condition the algorithm is trying to achieve or maintain.68 Furthermore, ensuring that algorithmic systems are functioning optimally requires testing, including gauging the effects of delivering suboptimal services, which can further undermine users' and creators' trust as well as reinforce already existing inequities.69
Changing Media Habits
Assessing the strength of the effects of new technologies like algorithmic filters and recommendations requires asking fundamental questions about media use: what media do people consume? When, how, and why? These traditional concerns remain relevant as media consumption occurs in new contexts. While the postwork evening hours are still a peak time for Netflix usage,70 online media consumption can take place at any time and most Americans check their phones multiple times a day.71 These shifting patterns of media use mean that tracking algorithmic influence is more difficult than, for example, estimating how many people are exposed to a television broadcast.
Vertical Integration
When platform companies produce or sell their own content, their recommendation algorithms' goals may not be in harmony with their users' interests, and content creators may struggle to reach an audience.72 Vertical integration introduces a potential conflict into the algorithm's function, privileging corporate benefits over user benefits, and represents an especially problematic instance of the lack of transparency around an automated system's goals. Amazon, for example, has been accused of manipulating its pricing algorithm to mislead users into thinking Amazon's own products were the cheapest option when they were not.73
Manipulation
How did Korean pop group BTS debut at the top of the US music charts? The group's global fan base coordinated a campaign to stream BTS's songs on shared American Spotify accounts.74 How music charts should gauge popularity in the postpurchase, streaming era is an unsettled issue,75 and BTS's manufactured triumph illustrates a flaw in automated systems: they are susceptible to adversarial users and coordinated manipulation. Furthermore, the value of data—particularly the data used to train ML algorithms—diminishes quickly if poisoned by trolls, manipulated by fans, or influenced by new, covert forms of marketing. Lay theories about “the wisdom of the crowd” may obfuscate the emergence of a new class of gatekeepers—highly active users who know how to influence algorithmic systems.76 As new methods of manipulation emerge, algorithmic accountability must be capable of addressing systems in adverse information environments.
Critical Questions for Algorithmic Accountability
We can summarize the preceding issues in the form of three questions and related subquestions:
1. Are inputs to algorithmic systems justified?
a. Who decides what does and does not count? How might data reproduce systemic inequity and historical prejudices? Who owns or has privileged access to data? How do datasets differ from each other and from other sources of cultural knowledge?
2. How do algorithms function as forms of regulation?
a. To what extent should automation be involved in cultural or journalistic recommendations? Does algorithmic regulation enhance or obfuscate policy objectives? How intentional is algorithmic optimization as an act of policymaking?
3. Are algorithms an appropriate instrument in the known situation or cultural context?
a. Is the situation understood well enough to estimate the impact of using algorithmic regulation?77 What happens when users behave in unexpected ways? How are conflicting user expectations to be resolved? Which users take precedence in deciding optimal performance?
For future research, these questions form a framework to analyze different formulations of algorithmic accountability policy. Policy should address these issues when aiming for improved accountability for inputs, code, and contexts. The next stage of academic work will entail comparative research into how these challenges are understood and confronted by policymakers, as we discuss briefly by way of conclusion. With this framework, we provide a toolkit to understand the intersection between algorithms as instruments and the policies, norms, and goals that optimize them.
Toward Comparative Research on Global Algorithmic Accountability in Cultural Policy
Thus far, we have identified questions for specific implementations—the next steps require situating algorithmic regulation within larger systems of media governance. As Des Freedman explains, media governance is the “sum total of mechanisms … that aim to organize media systems according to the resolution of media policy debates.”78 What are the larger governance processes shaping algorithmic accountability? Natali Helberger has described how the diverse values guiding competing theories of democracy could optimize news recommendation algorithms differently.79 In a similar vein, we consider how existing systems of media governance rooted in distinct policy debates configure algorithmic accountability differently.
Based on our review of key platform governance documents assembled by Dwayne Winseck and Manuel Puppis (consisting of 52 different platform regulation inquiries, reviews, and proceedings from across the globe),80 we can begin to identify three framings for the governance of algorithmic regulation in culture and media policy.
The first frame consists of cultural protectionism. In countries like Canada, algorithmic accountability is closely associated with the discoverability of cultural content and how algorithmic recommendation and search affect access to local culture. Cultural institutions have expressed concern that algorithms may prevent Canadians from easily finding and engaging with Canadian content. Relatedly, concerns about discoverability have also reflected long-standing fears about the oversized role that foreign companies play in the Canadian media market.81 The Canadian approach to discoverability resonates with concerns in Australia and New Zealand, where discoverability has also been perceived as a problem of the accessibility and prominence of local content.82
However, cultural policies are not geared solely toward enriching the lives of local consumers. A second frame presents aspects of algorithmic accountability as critical to participation in the global digital economy. The EU has been the most active in these larger matters as it has begun to build a strong regulatory agenda, although India has also sought to regulate vertical integration in online retailers like Walmart and Amazon.83 This framing situates algorithms as central to the economy and other critical democratic systems that require greater accountability to protect human rights and to ensure free and fair markets as well as the success of domestic industries.
The EU's General Data Protection Regulation (GDPR), which came into force in 2018, may be the clearest expression of the prominence of digital systems in the regulatory agenda. The GDPR limits the use of algorithmic decision-making, granting individuals “the right not to be subject to a decision based solely on automated processing” when those decisions have significant effects on the individual. The policy requires that automated decision-making entities provide “meaningful information about the logic involved” in their decisions, which has been interpreted as establishing a right to explanation,84 though lawyers and legal scholars continue to debate whether the rule guarantees an explanation of specific decisions or merely information about the system's overall rationale. Meanwhile, explainability (or interpretability) has become a critical objective within the ML community as it seeks to legitimate decisions that, by definition, lack human oversight.85
In addition to algorithmic accountability being caught up in these larger regulatory moves, the EU launched a study titled “AlgoAware” to build algorithmic awareness and improve media literacy around algorithms. To date, the project has reviewed activities in algorithmic recommendation, credit scores, and programmatic advertising. Meanwhile, online disinformation has become a subject of intense global scrutiny. Solutions have been devised (such as the EU's Code of Practice on Disinformation, a self-regulatory agreement among industry actors86), but given their newness they have yet to be truly tested. While some of these developments only touch on the concept of algorithmic accountability indirectly, they nonetheless offer examples to learn from and tools to use.
Cultural protectionism is nonetheless a factor in the EU's more economic approach, though more in terms of ensuring the financial success of domestic industries. The EU's review of “audiovisual media services” has questioned algorithmic recommendations by large content-streaming or -sharing platforms, most of which are based in the United States. As in Canada, fears that the dominance of these American companies will lead to the dominance of American productions and the weakening of both cultural diversity and local industries have led the EU to specify quotas for European works in platform catalogs. Article 13 of the Audiovisual Media Services Directive (AVMSD), as amended in 2018, states: “Member States shall ensure that media service providers of on-demand audiovisual media services under their jurisdiction secure at least a 30% share of European works in their catalogues and ensure prominence of those works.”87 The 2018 revision also elaborates on the means by which “prominence” for European productions could be achieved:
Prominence can be ensured through various means such as a dedicated section for European works that is accessible from the service homepage, the possibility to search for European works in the search tool available as part of that service, the use of European works in campaigns of that service or a minimum percentage of European works promoted from that service's catalogue, for example by using banners or similar tools.88
The difference here is that, unlike national cultural institutions elsewhere, the EU, as a multinational institution, lacks the motivation to produce a distinct culture through algorithmic recommendation. Nonetheless, though not a direct matter of algorithmic accountability, the quota will necessarily affect what content platforms have available to recommend.
The United States has taken a somewhat idiosyncratic approach to broaching the idea of algorithmic accountability, though no major legislative action has arrived yet. Since 2018, Republican politicians and their media allies have criticized Google, Facebook, and Twitter for supposedly being biased against conservative content and users. These wide-ranging attacks touched on discoverability concerns in claiming that Google's search algorithm and Facebook's News Feed algorithm favor left-leaning news sources over right-leaning ones and promote disproportionately negative news stories about Republicans.89 While some politicians, including President Trump, have threatened to pursue regulatory solutions to this purported problem, thus far the only concrete results have been congressional hearings,90 though in response to these claims Facebook undertook their own internal audit for political bias.91 Discoverability as a policy issue in the United States can thus be seen as a proxy for partisan political fighting (or, more generously, as the most recent iteration of a long-standing debate about political bias in the media), rather than the result of a conviction that government has a role to play in regulating access to online content—a conviction that would likely be difficult to square with the commitment to protecting “free speech” that supposedly motivates these complaints.
Conclusion
While these three conceptual framings of comparative media governance—cultural protectionism, economic independence, and concerns with political bias—are mostly nascent, these different national approaches hint at the variable futures for algorithmic accountability in content discovery. In taking such a perspective, a challenge arises over whether approaches to algorithmic accountability follow traditional models of comparative media systems or are better understood through comparative technology policy.92 Given this uncertainty, future research needs to map the distinct framings of algorithmic accountability beyond these cases, moving toward a more comprehensive and systematic framework that builds on what we have developed in this article.
Whatever the state of algorithmic accountability, we should be wary of framing the problem purely as a lack of regulation. As we have shown, algorithms govern the spread and promotion of culture by technical means. Content platforms are largely left to self-regulate their own inputs and code, while their algorithms, following the dictates of their optimalities, automatically regulate each user's access to culture, essentially generating individually tailored, constantly changing cultural policies. These bespoke policies are based not on publicly debatable goals like cultural protectionism or economic independence but on inferred user preferences and the unknowable optimizations chosen by firms or development teams. The outcomes of these policies are as inscrutable as their intentions—under our current system of platform governance, it is beyond our reach to know whether algorithmic regulation is discriminatory or radicalizing or otherwise undermines the values that guide public policy. If there is to be a future of democratically accountable algorithms, then it will be through acknowledging the consequences of algorithms for media regulation, policy, and governance.
Footnotes
UNESCO.
Ibid., 81.
Just and Latzer, 255.
McKelvey and Hunt.
Gillespie; Möller et al.
Rogers.
Introna and Nissenbaum.
Diakopoulos; Thorson et al.; Helberger, “On the Democratic Role of News Recommenders.”
Massanari; O'Callaghan et al.
Lascoumes and Le Galès; Lascoumes, Simard, and McCoy.
For notable exceptions, see Yeung and Musiani.
Gay et al.
Ananny and Crawford, 973.
Bucher, If … Then, 20–28.
Braman.
Helberger, Karppinen, and D'Acunto.
Myers West.
Lash and Dragos.
Bishop, “Anxiety, Panic and Self-Optimization”; “Managing Visibility on YouTube”; Bucher, “Cleavage Control”; If … Then; Cotter.
McKelvey; Plantin et al.
Eubanks; Pasquale.
Hindman.
Langlois; McKelvey and Hunt.
Helberger, “On the Democratic Role of News Recommenders”; Helberger, Karppinen, and D'Acunto; Helberger, “Exposure Diversity as a Policy Goal.”
Jannach et al.
Smith and Linden.
Portugal, Alencar, and Cowan.
Ibid., 206.
Pariser.
Tufekci.
Dubois and Blank.
Fletcher and Nielsen.
Tripodi, 47.
McKelvey and Hunt.
Smith and Linden, 17.
Broussard.
Orphanides.
Bucher, “The Algorithmic Imaginary”; Toff and Nielsen.
Cheney-Lippold.
Tran.
Sutton; Vranica and Marshall.
Keller; Montgomery.
Elmer, Langlois, and Redden; Rieder, Matamoros-Fernández, and Coromina; McKelvey and Hunt.
Freedman, 14.
Yeung.
McKelvey.
Freedman, 14.
Lascoumes and Le Galès, 10.
Roberts; Gray and Suri.
Bolukbasi et al.
Ananny, 103.
Powles and Nissenbaum.
Hoffmann.
See Benjamin for further discussion of these issues.
Hoffmann.
Nissenbaum, 73.
Competition Bureau.
Waldie and Kiladze.
Noble.
Crawford.
Hunt.
Pasquale.
Burrell.
Mittelstadt et al.
Neff and Stark; Henderson.
McKelvey.
Winning.
Bucher, “The Algorithmic Imaginary.”
Hoffmann; Rosenblat.
Deeth.
Perrin and Jiang.
Hindman.
Angwin and Mattu.
Montgomery.
Andrews.
Kalogeropoulos et al.
For more on the application of situated knowledges to algorithms, see Draude et al., and Luka and Millette.
Freedman, 14.
Helberger, “On the Democratic Role of News Recommenders.”
Puppis and Winseck.
Raboy.
Lobato and Scarlata.
Goel.
Selbst and Powles.
Gill and Hall.
“Code of Practice on Disinformation.”
European Parliament, “Audiovisual Media Services Directive.”
European Parliament, “Audiovisual Media Services Directive (Revision).”
Romm, “Facebook Got Grilled in the UK on Privacy.”
Romm, “Trump Signals He May Not Seek.”
Anderson.
Jasanoff; Hallin and Mancini.
Author notes
Both authors contributed equally; names are in alphabetical order.