Abstract

The word “algorithm” is best understood as a generic term for automated decision-making. Algorithms can be coded by humans or they can become self-taught through machine learning. Cultural goods and news increasingly pass through information intermediaries known as platforms that rely on algorithms to filter, rank, sort, classify, and promote information. Algorithmic content recommendation acts as an important and increasingly contentious gatekeeper. Numerous controversies around the nature of content being recommended—from disturbing children's videos to conspiracies and political misinformation—have undermined confidence in the neutrality of these systems. Amid a generational challenge for media policy, algorithmic accountability has emerged as one area of regulatory innovation. Algorithmic accountability seeks to explain automated decision-making, ultimately locating responsibility and improving the overall system. This article focuses on the technical, systemic issues related to algorithmic accountability, highlighting that deployment matters as much as development when explaining algorithmic outcomes. After outlining the challenges faced by those seeking to enact algorithmic accountability, we conclude by comparing some emerging approaches to addressing cultural discoverability by different international policymakers.

Algorithms can act as instruments of cultural policy. Though debated, this statement reflects a growing acknowledgment that algorithms influence cultural expression and access to cultural content. By way of introduction, the UN Convention on the Protection and Promotion of the Diversity of Cultural Expressions is a key document of global cultural policy that seeks “to protect and promote the diversity of cultural expressions” and “create the conditions for cultures to flourish and to freely interact in a mutually beneficial manner.”1 UNESCO's 2018 report on the Convention notes that information and technology firms do change the conditions of cultural production and that “Google, Facebook, Amazon and other large platforms are not simply ‘online intermediaries,’ they are data companies and, as such, make every possible effort to safeguard and fully exploit their primary input.”2 In rejecting the label intermediaries, the report emphasizes the transformative power of data-optimized algorithms for the future of cultural expression and access to culture.

The United Nations (UN) is not alone in recognizing the far-reaching implications of algorithms as policy instruments in cultural settings. Natascha Just and Michael Latzer claim that “algorithmic selection on the Internet tends to shape individuals' realities and consequently social order.”3 Our past research has found that algorithms play an important role in online content discoverability.4 Our discoverability framework draws on a growing literature about the modest but important impact of algorithms that classify, sort, filter, rank, and recommend content online.5 These concerns first gained prominence with the rise of search engines.6 As search engines became more popular—indeed, fundamental to navigating the Internet—critics noted how search algorithms increased the salience of some content and relegated other content behind the screen.7 And as search became more personalized, algorithms became a flashpoint for concerns about polarization and filter bubbles. Journalism scholars emphasized the ways algorithms influence the salience of certain types of news based on recommender algorithms' estimations of what is relevant to users.8 Similarly, algorithms for classifying and valuing online behaviors may amplify extremist content and views through personalized recommendations.9

While there is a large literature evaluating policy instruments,10 there are few frameworks to consider algorithms as policy instruments or forms of regulation, especially in cultural contexts.11 Our article proposes a framework to evaluate the barriers against holding algorithms publicly accountable as instruments of cultural policy. Building on cultural studies' use of circuits and moments to interpret culture,12 we identify three moments (input, code, and context) to evaluate how different algorithms act as part of media policy in cultural contexts. These moments do not simply offer the chance to make algorithmic regulation transparent, but provide opportunities to situate algorithms within larger systems of power and structural inequity.

Our framework contributes to the growing scholarship on algorithmic accountability that ultimately seeks to reveal the systems that code algorithms and create institutions of public, democratic governance for these technical forms of regulation. This article follows in the wake of Mike Ananny and Kate Crawford's perceptive insight that calls for transparency or open code “come at the cost of a deeper engagement with the material and ideological realities of contemporary computation.”13 We frame inputs, codes, and contexts as moments of algorithmic agency in cultural and media policy that correspond to regulatory issues that will likely be addressed by an emerging regulatory agenda (at least in some parts of the world), which we discuss in the conclusion.

Algorithms, Content, and Discontent

An algorithm can be broadly defined as a set of instructions to solve a problem or perform a task. In popular discussions of digital media, “algorithm” has become shorthand for an operating procedure that relies on data and software.14 These procedures can be written by human coders, or engineers can apply an approach called machine learning (ML) where the algorithms “learn” how to carry out tasks under various levels of human oversight.

Algorithms have been applied to all areas of information policy: information creation, flows, use, and, of particular importance to us, processing.15 Algorithms:

  • Create content such as retrospective year-in-review videos on Facebook and, increasingly, through advanced ML models such as OpenAI's GPT-2 and deepfake generation techniques

  • Manage flows through active recommendations16 and content caching as well as, more negatively, shadow bans that limit the appearance of certain messages on social media17

  • Mediate interaction with information, often through arrays of signals (for example, likes and comments) that inform information flows and content discoverability

  • Rank and filter information in ways that create incentives and conditions of interaction similar to markets or system engineering18 that creators must learn and “game” to succeed online19

Working in combination, algorithms become infrastructures, platforms, or hybrids of the two that increasingly coordinate social, economic, and cultural activity.20

Applications of algorithms in finance, hiring, price manipulation, and risk assessment prompted wider reflection about the nature of automated decision-making across government and private industry.21 Regulators and researchers have begun asking questions about anticompetitiveness and market share among the dominant companies in the field of digital content and whether algorithms may have biases that encourage concentration.22 As Hindman notes in his deft analysis of the digital economy, algorithmic content recommendation favors larger firms with bigger catalogs of information, creating powerful lock-in effects that diminish the competitiveness of smaller firms. Policy has shifted as a result. In April 2018, the European Union (EU) proposed a regulation “promoting fairness and transparency for business users of online intermediation services” as part of their Digital Single Market strategy.

Algorithms automate media and cultural policy. Though platforms or technology companies may not accept this statement (at least when appearing before regulators), it reflects growing consensus in the literature. When applied to cultural content, algorithms become technical means of calculating complex matters of reception and interpretation, such as relevance, taste, enjoyment, and personality.23 Natali Helberger has made crucial contributions in this area, drawing connections between algorithmic recommendation and long-established goals of media policy, such as exposure diversity, and proposing conceptual frameworks for policy regarding recommender systems, including theories of democracy.24

There are four major approaches to information processing with direct application to media and cultural policy,25 while new advances suggest the potential for radical change in the recommendation landscape through ML. These approaches are

  1. Content-based: This kind of recommender seeks to match a user's taste profile to specific items that the system guesses the user will like. Individual pieces of content are tagged and categorized (often in multiple ways), and once users show interest in a specific tag or category, they are directed to other items in that same grouping.

  2. Knowledge-based: These recommenders are usually implemented on platforms where user behavior is infrequent or past behavior is a poor predictor. Given a lack of existing information, these systems solicit guidance directly from the user to establish preferences. A drawback for platform operators is that knowledge-based and content-based approaches require them to obtain and manage a significant amount of information about the items in their catalog.

  3. Collaborative: Often called collaborative filtering, this system avoids the dilemma of needing catalog information by matching users to other users rather than to items. This method was popularized by Amazon and their well-known “customers who bought this item also bought” approach to making suggestions.26 Collaborative filtering leverages the data contained within a mass of individual profiles to find shared interests between users and then looks for the “missing” items. Importantly, the system does not need to know that these users share an interest in a specific genre or creator; it only needs to analyze their behavior for commonalities (a minimal code sketch follows this list). Amazon's recommender system has been widely influential and was adopted or adapted by other major platforms, including Netflix and YouTube.

  4. Context-aware: As accurate recommendations have come to be seen as a source of value for platforms, other advances have shifted the terrain. The widespread adoption of mobile devices has made possible the development of context-aware recommendations.27 While contextual information such as the time of day or one's IP address was already detectable through web browsing, mobile devices have allowed finer-grained data such as one's exact geographic location to be included in recommender systems' calculations.

  5. ML: ML is increasingly being used in recommender systems. ML “uses computers to simulate human learning and allows computers to identify and acquire knowledge from the real world, and improve performance of some tasks based on this new knowledge.”28 ML enables recommender systems to “learn” from their mistakes, independently gauging the success of their recommendations and adapting rapidly in response to implicit or explicit feedback. Advances in ML could allow recommender systems to devise new ways to calculate recommendations, uncovering patterns in user behavior that human engineers haven't considered looking for yet.
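To make the collaborative approach more concrete, the following is a minimal, illustrative sketch of item-to-item collaborative filtering in the spirit of the “customers who bought this item also bought” technique described above. The toy data, function names, and similarity measure are simplifications invented for exposition, not any platform's actual implementation.

```python
from collections import defaultdict
from itertools import combinations
import math

# Toy purchase histories: user -> set of items (illustrative data only).
histories = {
    "user_a": {"album_1", "album_2", "film_9"},
    "user_b": {"album_1", "album_2", "album_3"},
    "user_c": {"album_2", "album_3", "film_9"},
    "user_d": {"album_1", "film_9"},
}

# Count how often each pair of items co-occurs in the same user's history.
co_counts = defaultdict(int)
item_counts = defaultdict(int)
for items in histories.values():
    for item in items:
        item_counts[item] += 1
    for a, b in combinations(sorted(items), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def similarity(a, b):
    # Cosine similarity over co-occurrence: the system needs no knowledge of
    # genre or creator, only behavioral overlap between users.
    return co_counts[(a, b)] / math.sqrt(item_counts[a] * item_counts[b])

def recommend(user, top_n=3):
    """Rank items the user has not yet engaged with by their similarity to
    items the user has already engaged with."""
    seen = histories[user]
    candidates = {i for items in histories.values() for i in items} - seen
    scores = {c: sum(similarity(c, s) for s in seen) for c in candidates}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("user_d"))  # ['album_2', 'album_3']
```

Even this toy version encodes consequential choices: an item that has never co-occurred with anything else in the data can never be surfaced, which foreshadows the popularity problem discussed below.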

These different approaches to algorithmic design have consequences when processing cultural content. For example, ML could have a major impact on content-based recommendation as an algorithm could be trained to analyze, sort, and classify content, including new arrivals to the catalog, as well as look for patterns within and across it. Previously only humans could judge content in this way, but if machines can, metaphorically speaking, watch every single item in Netflix's database and index them in unprecedented ways, content-based recommendation could move in unforeseeable directions.

Another promising possibility of ML is that popularity could matter less. For example, if a product has never been purchased on Amazon or a YouTube video has single-digit views, it is unlikely that a collaborative system will recommend these items to anyone (unless the producer pays for more direct promotion). ML algorithms could solve this outlier problem—provided they're not programmed to prioritize popularity—and recommend things that lack visibility or an existing fan base. New avenues of discovery could open. Let loose on a streaming platform's archive of musical recordings and data about our listening habits, an ML algorithm designed to “learn” about musical attributes could discern patterns in our consumption histories (favored chords, rhythms, lyrical motifs, etc.) that we are unable to identify ourselves without training and/or time. With these musicological, rather than social, patterns in mind, listeners might be recommended songs, artists, or genres it would never have occurred to them to try, or works that struggle to be discovered due to lack of promotion.

Alternatively, these characteristics may be poor predictors of enjoyment, and an algorithmic recommender may never be able to calculate why someone prefers Carly Rae Jepsen to Justin Bieber. Despite popular enthusiasm for ML, software will likely continue to struggle with the broader and deeper social and affective dimensions of engaging with culture—the unquantifiable influence of feelings, friendship, and fandom. While our platforms and devices will continue to collect ever more and ever more detailed data about our behavior, the concept of “context-aware recommendations” masks the fact that these algorithms make judgments that translate into code the irreducibly complex nature of what makes up a “context” when it comes to our experiences of culture.

Building on these general concerns, we identify a few specific issues raised by the use of algorithms in cultural contexts.

Solipsism or Serendipity?

Algorithmic recommendations may increase social isolation and diminish public culture by restricting the salience of cultural expressions. The ubiquity of this algorithmic solipsism leads to polarized and self-interested understandings of the world. Critics who hold this view argue that personalized algorithms trap users in partisan information bubbles when it comes to the news, while they provide a restrictive diet of homogenous content when it comes to culture.29 Some scholars have claimed that these “filter bubbles” can amplify extremist points of view. Scholar and writer Zeynep Tufekci has warned about the possibility of algorithmic radicalization, where automated recommendations insidiously push users toward the fringes of political discourse.30

However, the filter bubble hypothesis has been disputed and its effects assailed as exaggerated.31 For example, journalism scholars Fletcher and Nielsen recently found that using Google to search for news stories leads to what they call “automated serendipity,”32 where search engine users are in fact exposed to a wider range of sources reflecting a diverse range of political positions.

Though the filter bubble thesis has been too fervently embraced, its appeal endures due to a less contentious claim: there is a lack of transparency around algorithmic inputs and code. For example, sociologist Francesca Tripodi observed that some people primarily motivated to use Google to “expand out from their ideological positions” remain “unaware that their queries will simply return information that reaffirms their beliefs depending on what phrase they choose to type into the search engine.”33 In other words, the outcomes of Google's algorithmic sorting are so difficult to anticipate that, as seen in Fletcher and Nielsen's study, some users read news sources they might have otherwise ignored, while, as in Tripodi's fieldwork, other users ideologically prefilter their search results even when explicitly trying to do the opposite.

Revenue Demands

A major problem with algorithmic recommendation is that the business models of content discovery platforms can supply the underlying logic rather than considerations of what is most beneficial to users or communities. The drive to demonstrate growth to shareholders can incentivize companies to develop algorithms focused primarily on keeping users glued to their screens and exposed to advertising. Or, in the case of platforms like Netflix that produce their own content, recommendations can be skewed to emphasize their own products above other, possibly more relevant results.34

Consider the effects of news aggregators programmed to optimize their filters to maximize user engagement. By valorizing attention-grabbing content that provokes an intense response, are we replacing traditional journalistic norms like balance and objectivity (however unrealized those norms may be in legacy media)? The purpose of ranking and recommending, broadly speaking, is to please readers and keep them immersed in the flow of content. What happens to democracy when transposing this model to journalism? For example, news stories about American politics might engage readers outside of the United States, but does promoting these stories over less sensational local news leave them uninformed? Turning to cultural productions, what are the pitfalls of making content “sticky”? Who wins and who loses when engagement matters most?

User Data and Privacy

Considering how platforms measure engagement leads us to privacy concerns. Most of the established means for measuring engagement online are relatively simple, such as views of a webpage, shares on social media, or time spent reading, watching, or listening. These inputs already require user surveillance, but recommender system engineers continue to search for ways to dig deeper. In an article reviewing Amazon's groundbreaking collaborative filtering algorithm, two of its creators, Brent Smith and Greg Linden, speculate about the future of content recommendation; they claim that “discovery should be like talking with a friend who knows you, knows what you like, works with you at every step, and anticipates your needs.”35 Would digital assistants achieving this level of intimate attentiveness be in the best interests of Amazon's shareholders or its customers? Should online retailers or content platforms gain deep insight into our inner lives simply to give us better recommendations? Is the pursuit of “better” recommendations merely cover for more lucrative user data mining?

The Limits of Artificial Intelligence

The belief that in interacting with recommender systems we are dealing with a form of intelligence can be misleading. Most people have experienced clicking on a piece of content and seeing their recommendations turn alien or irrational—absentmindedly click on a viral video and now YouTube's algorithm thinks you're obsessed with kittens. Recommender systems cannot make the kind of sensitive, multidimensional judgments that humans can, which is why platforms use an ambiguous, neutral term like “engagement” to define their goal. These systems can never understand our personal, social, and cultural context fully due to their reliance on behavioristic proxies (clicking a link) to guess at correlated attributes (loves kittens).

For example, YouTube's deep-learning algorithm cannot tell when you watched a video because your friends were mocking it, you're researching conspiracy theories for school, you fell asleep with autoplay on, or your sister borrowed your phone—it only registers that you engaged with the video. As former data journalist and software developer Meredith Broussard puts it, “we talk about computers as being able to do anything, and that's just rhetoric because ultimately they're machines, and what they do is they compute, they calculate, and so anything you can turn into math, a computer can do.”36 Platforms count and measure signals to deduce our experiences of or feelings about content, but not every input is as clear as clicking the thumbs-up button and not all feelings and experiences can be translated into math.

Artificially intelligent recommender systems can also behave in unpredictable ways, and fully automating online discoverability could have aberrant results. For example, YouTube has been criticized for recommending disturbing content to children, or, in Wired UK's vivid description, “churning out blood, suicide and cannibalism.”37 The ML-powered algorithm appears to have judged the success of its recommendations entirely on whether users engaged with the content (i.e., how much time they spent watching it, which for very young children might be an especially meaningless metric). The culprit behind this phenomenon remains a mystery, illustrating some of the obstacles to algorithmic accountability that we will discuss later, such as the difficulty of tracing problems to their sources and these systems' vulnerability to manipulation by humans and bots.

Even in more quotidian situations, users can be perplexed or made uneasy by algorithms, prompting them to conjure theories to explain why the algorithm failed or misunderstood certain choices.38 In situations where users interact with algorithmic filters and recommendations over time, some may come to accept algorithmic categories as a scientific determination of their taste or interests.39 As platforms filter content in and out of users' feeds, one's “personal” taste becomes something developed in collaboration with algorithmic recommendation.

An Information Vacuum

Individual users aren't the only ones confronting a dearth of information about algorithmic outcomes on platforms. Indeed, even before considering the opacity of algorithmic processes, we currently lack ways to verify platforms' claims about something as simple as a view or play count. The problem is that most data about what content attracts attention comes from the platforms themselves or from affiliated data brokers. The audience measurement firm Nielsen, for example, collects data supplied by Facebook, YouTube, and Hulu when compiling its online viewership statistics.40 That the same companies that generate revenue by selling advertising measure their own viewership is problematic, causing ongoing concerns about overreporting41 or misleading audience metrics.42 Meanwhile, some platforms that don't rely on advertising, such as Netflix, refuse to disclose any audience data at all or do so only to boost publicity.

In the digital advertising industry, similar concerns have led to independent audits of ad impressions through bodies like the Media Rating Council. A shift to external review has not yet happened for other online metrics even though there is historic precedent. Audience estimates for television and radio were established by third parties like Nielsen or polling firms, which supplemented the data disclosed by the cultural industries. In the platform era, efforts to develop independent sources of data face the challenge of potentially violating the platforms' terms of service or privacy agreements, and, furthermore, they require developing complex methods to investigate highly unstable objects of research.43

Algorithms as Regulation, Optimization as Policy

Des Freedman's distinction between media policy and media regulation helps us to understand how algorithms may act as regulatory instruments. If media regulation refers to “tools that are deployed on the media to achieve established policy goals,”44 then algorithms can be considered forms of regulation that calculate how to process cultural content.45 Algorithms are optimized, which means they are evaluated in comparison to theoretical models, hypothetical ideal states, or code considered state of the art. The optimal encodes how an algorithm processes information.46 If “media policy refers to the development of goals and norms leading to the creation of instruments that are designed to shape the structure and behavior of media systems,”47 then an algorithm's optimality can be considered a kind of cultural policy, but, crucially, one that is hidden from scrutiny.
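As a hypothetical illustration of how an optimal encodes policy, the sketch below ranks the same candidate items under two different objective functions: one that optimizes only for predicted engagement and one that adds an explicit bonus for local content. The catalogue data, scoring functions, and the size of the local-content bonus are all invented for illustration; the point is that changing a single line of the objective changes which culture becomes visible.

```python
# Toy catalogue: each candidate has a predicted watch time (minutes) and a
# flag for whether it is local content. All values are invented for illustration.
candidates = [
    {"id": "global_hit",  "predicted_minutes": 42, "local": False},
    {"id": "viral_clip",  "predicted_minutes": 38, "local": False},
    {"id": "local_drama", "predicted_minutes": 30, "local": True},
    {"id": "local_doc",   "predicted_minutes": 25, "local": True},
]

def engagement_score(item):
    # "Optimal" defined purely as keeping the user watching longest.
    return item["predicted_minutes"]

def discoverability_score(item, local_bonus=15):
    # "Optimal" redefined to include an explicit policy goal: boost local works.
    return item["predicted_minutes"] + (local_bonus if item["local"] else 0)

def rank(score):
    return [item["id"] for item in sorted(candidates, key=score, reverse=True)]

print(rank(engagement_score))
# ['global_hit', 'viral_clip', 'local_drama', 'local_doc']
print(rank(discoverability_score))
# ['local_drama', 'global_hit', 'local_doc', 'viral_clip']
```

In either case the ranking is produced automatically, but the choice of objective function is a policy decision made long before any user sees a recommendation.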

The line between the policy and the instrument is blurred in algorithmic decision-making, much as it is in public policy. As Pierre Lascoumes and Patrick Le Galès argue, “instruments at work are not purely technical: they produce specific effects, independently of their stated objectives (the aims ascribed to them), and they structure public policy according to their own logic.”48 The policy informs the instrument and the instrument informs the policy. The decision to use algorithms as tools of governance treats the problems of media systems as technical challenges capable of being solved by engineers, which, in turn, reflects a worldview circumscribed by the need to create rule-based order and manage cultural expression through code. We can see this logic at work in large platform companies' preference to manage their systems through algorithms rather than hire more staff, or, more accurately, to publicly proclaim the efficacy of automated regulation while hiding their reliance on a shadow workforce of human contractors and outsourced employees.49

We identify three moments that provide opportunities to consider the policy implications of algorithms:

  1. Inputs, signals, ground truths, and training data that inform the operations of an algorithmic system

  2. Written code, black-boxed algorithms, libraries, and other technical agents operating in tandem in a system

  3. Contexts where the expected operation of a system meets its real-world application

These moments occur simultaneously, but distinguishing them helps clarify which agents make decisions, who or what decides whether these decisions are high quality, and how these practices might be made more accountable.

Inputs may embed policy decisions unintentionally when bias is ignored. For instance, ML usually requires massive datasets so the algorithm can discover meaningful patterns, but those patterns may end up reflecting systemic inequity contained within the data. For example, an ML algorithm trained on a corpus of English-language texts associated the word “genius” with men and “model” with women.50 Is this discriminatory outcome the fault of the algorithm, the developers, or a dataset that reflected a history of sexist assumptions? In other words, how a platform's engineers program its recommender system establishes the system's values, and some values will inevitably be excluded. These choices are concealed both in how algorithms are optimized and in the decision to use algorithms to manage cultural expression in the first place.
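To illustrate how such associations can be surfaced, the following is a hedged sketch of probing gender associations in pretrained word embeddings through vector analogies, in the spirit of the study cited above. It assumes the gensim library and a locally downloaded pretrained word2vec file; the file name is a placeholder, and the specific words returned depend entirely on the training corpus.

```python
from gensim.models import KeyedVectors

# Assumes a pretrained embedding file has been downloaded locally
# (e.g., the GoogleNews word2vec vectors); the path is a placeholder.
vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

def gendered_analogy(word, topn=3):
    """Ask the embedding space: `word` is to "he" as ? is to "she".
    Skewed answers reflect patterns absorbed from the training corpus,
    not a judgment made by any individual engineer."""
    return vectors.most_similar(positive=[word, "she"], negative=["he"], topn=topn)

# Probing evaluative or occupational terms often surfaces the kinds of
# gendered associations reported in the literature.
for word in ["genius", "brilliant", "programmer"]:
    print(word, gendered_analogy(word))
```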

Thinking of algorithms as policy instruments implies reconsidering coding as policymaking. Coding can conceal making cultural policy choices as solving purely technical problems. Communications scholar Mike Ananny argues that algorithms are presented as objective science, but the ways they process information “raise ethical concerns to the extent that they signal certainty, discourage alternative explorations, and create coherence among disparate objects—categorically narrowing the set of socially acceptable answers to the question of what ought to be done.”51 Algorithms also black box policy by translating its norms and rules into inscrutable, often proprietary systems that cannot be publicly scrutinized due to intellectual property or security concerns.

Algorithmic regulation can vary in its efficacy depending on the content. If inputs and code, as we discuss in the following, create enough structural barriers then algorithms may not be effective, or worse, may perpetuate the problem. The idea that we can “fix bias” by better optimizing algorithmic systems rests on the assumption that every social problem has a technical solution and evades urgent questions about how technologies like facial recognition are used to perpetuate injustice irrespective of the efficacy of their algorithms.52 As Anna Lauren Hoffman has argued, efforts to address data-driven discrimination often fail to do so in ways that reckon with the complexity of social hierarchies and intersecting forms of discrimination; likewise, positive forms of discrimination that advantage some people over others are rarely considered.53

To think about this more concretely, consider the question of whether a content recommendation system should factor race or gender into its calculations (rather than pretend to ignore it). As seen in activist campaigns like #OscarsSoWhite, the lack of representation and opportunity for actors, directors, writers, and other workers from historically marginalized communities is a major concern within most cultural industries as well as among people who watch its creations. As algorithmic recommendations grow in influence, their role in fostering inequity must be scrutinized. Should streaming video platforms or news recommendation systems change their code to promote content that helps to increase visibility for underrepresented people and stories?54 Or could inputting this kind of data have harmful consequences? Whatever the case, neither the data nor the algorithms are neutral. The platforms that design and implement these algorithms must acknowledge systemic inequities and develop remedies, lest they reproduce and amplify injustice in the status quo.55

A Framework for Evaluating Algorithmic Policy Instruments

In the following section, we elaborate our framework to consider how algorithms, as means of regulation, create barriers to accountable cultural policy. Table 1 summarizes this discussion, including some of the examples we mention in the following. At the operational level, accountability requires agents “to answer for their work, signals esteem for high quality work, and encourages diligent, responsible practices.”56 Achieving accountability leads to systems that perform to higher standards. Governments and other policymaking bodies are, at best, at the beginning stages of creating regulatory frameworks to address, let alone enforce, algorithmic accountability, and strategies vary considerably across national borders, suggesting that comparative research is necessary.

TABLE 1

Obstacles to Accountability

Moment: Inputs
Obstacles: Proprietary data; systemic inequity and prejudice in data; personalization; localization
Examples: Boston's Street Bump app; comparing Google results; Facebook's trending stories

Moment: Code
Obstacles: Black boxes; traceability; instability; unpredictability; lack of diversity
Examples: Spotify's Discover Weekly; children's YouTube recommendations; walkouts by Google employees

Moment: Context
Obstacles: Anticipated use or goals; changing media habits; vertical integration; manipulation
Examples: Fan campaigns; Microsoft's chatbot Tay; Netflix Originals; ubiquitous media consumption

Inputs

Proprietary Data

The data used in algorithmic decision-making is often confidential and not easily accessed by public institutions. Data is a known strategic asset, giving a competitive advantage to companies with direct access and the ability to aggregate data.57 User data is a particularly valuable form of capital, as recent revelations about Facebook giving special access to preferred partners have made evident.58 The critical importance of data to ML further increases its value and motivates companies to restrict access.

Systemic Inequity and Prejudice in Data

While often framed as politically neutral, data is a representation of an inequitable world. If used naively, data will reflect and reproduce established injustices. By looking at Google Search results, Safiya Umoja Noble exposes the uneven, and largely pornographic, political economy of digitized racialized bodies. Search, Noble argues, obfuscates deeply inequitable conditions of production as an objective reflection of reality.59 Similarly, AI Now cofounder Kate Crawford has written about Boston's use of the Street Bump app, which used smartphone data to automatically detect road conditions, pinpointing potholes in need of repair.60 However, with smartphone adoption as low as 16 percent in the city at the time, and smartphones inevitably less accessible to lower income communities, the app abetted discrimination in the allocation of vital municipal services. These concerns matter in cultural contexts. What prepackaged analytics developed for advertising and marketing might find their way into cultural policy through algorithms and data? In light of concerns about social media analytics discussed earlier, how might these inputs have bias or miss important cultural indicators?61

Personalization

Algorithmic filters are almost always personalized through one of the techniques discussed earlier, and so all perspectives on a given system are just that: one perspective. No single outcome can be assumed to be indicative of the overall system, and it is impossible for even an expert user to deduce how the algorithm functions in the abstract based on individual results. When sharing anecdotes about political bias in Google news results, for example, one must keep in mind that not every user will see the same results for the same search query.

Localization

As with the problem of personalization, context-aware algorithms vary their outcomes according to geography, particularly the different ways it can be signaled by localization, user preference, mobile GPS location, or IP address. For example, Facebook in Canada used to restrict its “trending stories” feature to certain linguistic regions, showing English-speaking Canadians the list without doing the same for French-speaking Canadians. More broadly, do systems developed in English perform differently in other languages?

Code

Black Boxes

Algorithmic systems are often referred to as “black boxes” because users cannot see how they arrive at their outcomes.62 With the advent of ML, we face a situation where these decisions are difficult to understand not only for users but for the algorithm's developers as well. Black boxing is often deliberate. Knowledge of the operation of Google's search or Spotify's Discover Weekly algorithms represents a competitive advantage for these companies and will not be readily disclosed. The Criminal Code (for acts such as computer mischief) and new trade law, such as the pending United States–Mexico–Canada Agreement, may restrict access to source code, establishing formidable legal barriers to oversight. However, as Jenna Burrell has pointed out, algorithmic opacity can also stem from unintentional factors, such as the fact that “reading” code generally requires special training or that ML algorithms by definition change during use—simply “opening” the box, so to speak, may not be illuminating.63

Traceability

Algorithms function as components in distributed systems where responsibilities are shared. Recommendations and other forms of filtering do not result from any one part but from the interactions among many. When algorithmic processes lead to harmful outcomes, traceability—discovering the source of the failure, assigning responsibility for it, and punishing or at least educating those responsible—becomes a key ethical concern.64 As in the case of children's YouTube recommendations, tracing problems to their source can be particularly daunting in ML systems, where the developer, the training data, or malicious users could all be the origin.

Instability

Most platforms constantly develop and release updates to their code.65 Cloud computing enables these changes to occur at scale and often without indication to the user. As a result, all observations of algorithmic systems must be sensitive to the time and state of their unstable object of research. Instability necessitates continual monitoring to detect changes.
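As a minimal illustration of what such monitoring involves, the sketch below compares two hypothetical snapshots of a ranked result list captured at different times and reports what an outside observer can detect. The data and the function are invented for illustration, not any platform's tooling; a real audit would also have to record when and for whom each snapshot was taken.

```python
def diff_rankings(old, new):
    """Compare two snapshots of a ranked result list and report items added,
    items dropped, and whether the surviving items were reordered."""
    old_set, new_set = set(old), set(new)
    shared_old_order = [item for item in old if item in new_set]
    shared_new_order = [item for item in new if item in old_set]
    return {
        "added": [item for item in new if item not in old_set],
        "removed": [item for item in old if item not in new_set],
        "reordered": shared_old_order != shared_new_order,
    }

# Toy snapshots of the same query taken a day apart (illustrative only).
monday = ["video_a", "video_b", "video_c", "video_d"]
tuesday = ["video_b", "video_a", "video_e", "video_d"]

print(diff_rankings(monday, tuesday))
# {'added': ['video_e'], 'removed': ['video_c'], 'reordered': True}
```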

Unpredictability

Algorithmic systems, especially ML and multiagent ones, exhibit emergent behavior that cannot always be anticipated. Software bugs and complexity introduce further unpredictability that compounds uncertainty about intent. Devising an algorithm to solve one problem may lead to it inadvertently creating new ones, such as copyright enforcement or antipiracy algorithms blocking users or content they weren't intended to police.66

Lack of Diversity in Development Teams

Algorithmic processes may fail or further marginalize people and communities if they perpetuate the status quo. A significant obstacle to ameliorating this problem is a lack of diversity in the technology industry, where 76 percent of technical jobs are held by men and the overall workforce is 95 percent white,67 meaning that underrepresented voices go unheard and important perspectives are excluded.

Context

Anticipated Use or Goals

What goals do algorithmic systems hold? What optimal behaviors are they trying to nudge users toward? How do these automated systems determine whether they have succeeded? These questions concern optimization, and the principles guiding optimization are usually neither public nor knowable through digital methods. Algorithmic processes can seem enigmatic or uncanny to users because they don't know what optimal condition the algorithm is trying to achieve or maintain.68 Furthermore, ensuring that algorithmic systems are functioning optimally requires testing, including gauging the effects of delivering suboptimal services, which can further undermine users' and creators' trust as well as reinforce already existing inequities.69

Changing Media Habits

Assessing the strength of the effects of new technologies like algorithmic filters and recommendations requires asking fundamental questions about media use: what media do people consume? When, how, and why? These traditional concerns remain relevant as media consumption occurs in new contexts. While the postwork evening hours are still a peak time for Netflix usage,70 online media consumption can take place at any time and most Americans check their phones multiple times a day.71 These shifting patterns of media use mean that tracking algorithmic influence is more difficult than, for example, estimating how many people are exposed to a television broadcast.

Vertical Integration

When platform companies produce or sell their own content, their recommendation algorithms' goals may not be in harmony with their users' interests and content creators may struggle to reach an audience.72 Vertical integration introduces potential conflict in the algorithm's function, overriding user benefits for corporate ones, representing an especially problematic instance of the lack of transparency around an automated system's goals. Amazon, for example, has been accused of manipulating their pricing algorithm to mislead users into thinking Amazon's own products were the cheapest option when they were not.73

Manipulation

How did Korean pop group BTS debut at the top of the US music charts? The group's global fan base coordinated a campaign to stream BTS's songs on shared American Spotify accounts.74 How music charts should gauge popularity in the postpurchase, streaming era is an unsettled issue,75 and BTS's manufactured triumph illustrates a flaw in automated systems: they are susceptible to adversarial users and coordinated manipulation. Furthermore, the value of data—particularly the data used to train ML algorithms—diminishes quickly if poisoned by trolls, manipulated by fans, or influenced by new, covert forms of marketing. Lay theories about “the wisdom of the crowd” may obfuscate the emergence of a new class of gatekeepers—highly active users who know how to influence algorithmic systems.76 As new methods of manipulation emerge, algorithmic accountability must be capable of addressing systems in adverse information environments.

Critical Questions for Algorithmic Accountability

We can summarize the preceding issues in the form of three questions and related subquestions:

  1. Are inputs to algorithmic systems justified?

     a. Who decides what does and does not count? How might data reproduce systemic inequity and historical prejudices? Who owns or has privileged access to data? How do datasets differ from each other and from other sources of cultural knowledge?

  2. How do algorithms function as forms of regulation?

     a. To what extent should automation be involved in cultural or journalistic recommendations? Does algorithmic regulation enhance or obfuscate policy objectives? How intentional is algorithmic optimization as an act of policymaking?

  3. Are algorithms an appropriate instrument in the known situation or cultural context?

     a. Is the situation understood well enough to estimate the impact of using algorithmic regulation?77 What happens when users behave in unexpected ways? How to resolve conflicting user expectations? Which users take precedence in deciding optimal performance?

For future research, these questions form a framework to analyze different formulations of algorithmic accountability policy. Policy should address these issues when aiming for improved accountability for inputs, code, and contexts. The next stage for academic research will entail comparative research into how these challenges are understood and confronted by policymakers, as we discuss briefly by way of conclusion. With this framework, we provide a toolkit to understand the intersection between algorithms as instruments and the policies, norms, and goals that optimize them.

Toward Comparative Research on Global Algorithmic Accountability in Cultural Policy

Thus far, we have identified questions for specific implementations—the next steps require situating algorithmic regulation within larger systems of media governance. As Des Freedman explains, media governance is the “sum total of mechanisms … that aim to organize media systems according to the resolution of media policy debates.”78 What are the larger governance processes shaping algorithmic accountability? Natali Helberger has described how the diverse values guiding competing theories of democracy could optimize news recommendation algorithms differently.79 In a similar vein, we consider how existing systems of media governance rooted in distinct policy debates configure algorithmic accountability differently.

Based on our review of key platform governance documents assembled by Dwayne Winseck and Manuel Puppis (consisting of 52 different platform regulation inquiries, reviews, and proceedings from across the globe),80 we can begin to identify three framings for the governance of algorithmic regulation in culture and media policy.

The first frame consists of cultural protectionism. In countries like Canada, algorithmic accountability is closely associated with the discoverability of cultural content and how algorithmic recommendation and search affect access to local culture. Cultural institutions have expressed concern that algorithms may prevent Canadians from easily finding and engaging with Canadian content. Relatedly, concerns about discoverability have also reflected long-standing fears about the oversized role that foreign companies play in the Canadian media market.81 The Canadian approach to discoverability resonates with concerns in Australia and New Zealand, where discoverability has also been perceived as a problem of the accessibility and prominence of local content.82

However, cultural policies are not geared solely toward enriching the lives of local consumers. A second frame presents aspects of algorithmic accountability as critical to participation in the global digital economy. The EU has been the most active in these larger matters as it has begun to build a strong regulatory agenda, although India has also sought to regulate vertical integration in online retailers like Walmart and Amazon.83 This framing situates algorithms as central to the economy and other critical democratic systems that require greater accountability to protect human rights and to ensure free and fair markets as well as the success of domestic industries.

The EU's General Data Protection Regulation (GDPR), passed in 2018, may be the clearest expression of the prominence of digital systems in the regulatory agenda. The GDPR limits the use of algorithmic decision-making, granting individuals “the right not to be subject to a decision based solely on automated processing” when those decisions have significant effects on the individual. The policy requires that automated decision-making entities provide “meaningful information about the logic involved” in their decisions, which has been interpreted as establishing a right to explanation,84 though lawyers and legal scholars continue to debate whether the rule guarantees an explanation of specific decisions or merely information about the system's overall rationale. Meanwhile, explainability (or interpretability) has become a critical objective within the ML community as it seeks to legitimate decisions that, by definition, lack human oversight.85

In addition to algorithmic accountability being caught up in these larger regulatory moves, the EU launched a study titled “AlgoAware” to build algorithmic awareness and improve media literacy around algorithms. To date, the project has reviewed activities in algorithmic recommendation, credit scores, and programmatic advertising. Meanwhile, online disinformation has become a subject of intense global scrutiny. Solutions have been devised (such as the EU's Code of Practice on Disinformation, a self-regulatory agreement among industry actors86), but given their newness they have yet to be truly tested. While some of these developments only touch on the concept of algorithmic accountability indirectly, they nonetheless offer examples to learn from and tools to use.

Cultural protectionism is nonetheless a factor in the EU's more economic approach, though more in terms of ensuring the financial success of domestic industries. The EU's review of “audiovisual media services” has questioned algorithmic recommendations by large content-streaming or -sharing platforms, most of which are based in the United States. As in Canada, fears that the dominance of these American companies will lead to the dominance of American productions and the weakening of both cultural diversity and local industries have led the EU to specify country-based content quotas. Article 13 of the Audiovisual Media Services Directive (AVMSD), as amended in 2018, states: “Member States shall ensure that media service providers of on-demand audiovisual media services under their jurisdiction secure at least a 30% share of European works in their catalogues and ensure prominence of those works.”87 The revised directive elaborates on the means by which “prominence” for European productions could be achieved:

Prominence can be ensured through various means such as a dedicated section for European works that is accessible from the service homepage, the possibility to search for European works in the search tool available as part of that service, the use of European works in campaigns of that service or a minimum percentage of European works promoted from that service's catalogue, for example by using banners or similar tools.88

The difference here is that the EU, as a multinational institution, lacks the motivation that national cultural institutions elsewhere have to produce a distinct culture through algorithmic recommendation. Nonetheless, though not a direct matter of algorithmic accountability, the production quota will necessarily affect what content platforms have available to recommend.

The United States has taken a somewhat idiosyncratic approach to broaching the idea of algorithmic accountability, though no major legislative action has arrived yet. Since 2018, Republican politicians and their media allies have criticized Google, Facebook, and Twitter for supposedly being biased against conservative content and users. These wide-ranging attacks touched on discoverability concerns in claiming that Google's search algorithm and Facebook's News Feed algorithm favor left-leaning news sources over right-leaning ones and promote disproportionately negative news stories about Republicans.89 While some politicians, including President Trump, have threatened to pursue regulatory solutions to this purported problem, thus far the only concrete results have been congressional hearings,90 though in response to these claims Facebook undertook their own internal audit for political bias.91 Discoverability as a policy issue in the United States can thus be seen as a proxy for partisan political fighting (or, more generously, as the most recent iteration of a long-standing debate about political bias in the media), rather than the result of a conviction that government has a role to play in regulating access to online content—a conviction that would likely be difficult to square with the commitment to protecting “free speech” that supposedly motivates these complaints.

Conclusion

While these three conceptual framings of comparative media governance—cultural protectionism, economic independence, and concerns with political bias—are mostly nascent, these different national approaches hint at the variable futures for algorithmic accountability in content discovery. In taking such a perspective, a challenge remains in determining whether approaches to algorithmic accountability follow traditional models of comparative media systems or are better understood through comparative technology policy.92 Given this uncertainty, future research needs to map the distinct framings of algorithmic accountability beyond these cases, moving toward a more comprehensive and systematic framework that builds on what we have developed in this article.

Whatever the state of algorithmic accountability, we should be wary of framing the problem purely as a lack of regulation. As we have shown, algorithms govern the spread and promotion of culture by technical means. Content platforms are largely left to self-regulate their own inputs and code, while their algorithms, following the dictates of their optimalities, automatically regulate each user's access to culture, essentially generating individually tailored, constantly changing cultural policies. These bespoke policies are based not on publicly debatable goals like cultural protectionism or economic independence but on inferred user preferences and the unknowable optimizations chosen by firms or development teams. The outcomes of these policies are as inscrutable as their intentions—under our current system of platform governance, it is beyond our reach to know whether algorithmic regulation is discriminatory or radicalizing or otherwise undermines the values that guide public policy. If there is to be a future of democratically accountable algorithms, then it will be through acknowledging the consequences of algorithms for media regulation, policy, and governance.

Footnotes

The Canadian Commission for UNESCO (CCUNESCO) and the Government of Canada supported the production of this article.

1.

UNESCO.

2.

Ibid., 81.

3.

Just and Latzer, 255.

4.

McKelvey and Hunt.

5.

Gillespie; Möller et al.

6.

Rogers.

7.

Introna and Nissenbaum.

8.

Diakopoulos; Thorson et al.; Helberger, “On the Democratic Role of News Recommenders.”

9.

Massanari; O'Callaghan et al.

10.

Lascoumes and Le Gales; Lascoumes, Simard, and McCoy.

11.

For notable exceptions, see Yeung and Musiani.

12.

Gay et al.

13.

Ananny and Crawford, 973.

14.

Bucher, If … Then, 20–28.

15.

Braman.

16.

Helberger, Karppinen, and D'Acunto.

17.

Myers West.

18.

Lash and Dragos.

19.

Bishop, “Anxiety, Panic and Self-Optimization”; “Managing Visibility on YouTube”; Bucher, “Cleavage Control”; If … Then; Cotter.

20.

McKelvey; Plantin et al.

21.

Eubanks; Pasquale.

22.

Hindman.

23.

Langlois; McKelvey and Hunt.

24.

Helberger, “On the Democratic Role of News Recommenders”; Helberger, Karppinen, and D'Acunto; Helberger, “Exposure Diversity as a Policy Goal.”

25.

Jannach et al.

26.

Smith and Linden.

27.

Portugal, Alencar, and Cowan.

28.

Ibid., 206.

29.

Pariser.

30.

Tufekci.

31.

Dubois and Blank.

32.

Fletcher and Nielsen.

33.

Tripodi, 47.

34.

McKelvey and Hunt.

35.

Smith and Linden, 17.

36.

Broussard.

37.

Orphanides.

38.

Bucher, “The Algorithmic Imaginary”; Toff and Nielsen.

39.

Cheney-Lippold.

40.

Tran.

41.

Sutton; Vranica and Marshall.

42.

Keller; Montgomery.

43.

Elmer, Langlois, and Redden; Rieder, Matamoros-Fernández, and Coromina; McKelvey and Hunt.

44.

Freedman, 14.

45.

Yeung.

46.

McKelvey.

47.

Freedman, 14.

48.

Lascoumes and Le Gales, 10.

49.

Roberts; Gray and Suri.

50.

Bolukbasi et al.

51.

Ananny, 103.

52.

Powles and Nissenbaum.

53.

Hoffmann.

54.

See Benjamin for further discussion of these issues.

55.

Hoffmann.

56.

Nissenbaum, 73.

57.

Competition Bureau.

58.

Waldie and Kiladze.

59.

Noble.

60.

Crawford.

61.

Hunt.

62.

Pasquale.

63.

Burrell.

64.

Mittelstadt et al.

65.

Neff and Stark; Henderson.

66.

McKelvey.

67.

Winning.

68.

Bucher, “The Algorithmic Imaginary.”

69.

Hoffmann; Rosenblat.

70.

Deeth.

71.

Perrin and Jiang.

72.

Hindman.

73.

Angwin and Mattu.

74.

Montgomery.

75.

Andrews.

76.

Kalogeropoulos et al.

77.

For more on the application of situated knowledges to algorithms, see Draude et al., and Luka and Millette.

78.

Freedman, 14.

79.

Helberger, “On the Democratic Role of News Recommenders.”

80.

Puppis and Winseck.

81.

Raboy.

82.

Lobato and Scarlata.

83.

Goel.

84.

Selbst and Powles.

85.

Gill and Hall.

86.

“Code of Practice on Disinformation.”

87.

European Parliament, “Audiovisual Media Services Directive.”

88.

European Parliament, “Audiovisual Media Services Directive (Revision).”

89.

Romm, “Facebook Got Grilled in the UK on Privacy.”

90.

Romm, “Trump Signals He May Not Seek.”

91.

Anderson.

92.

Jasanoff; Hallin and Mancini.

Bibliography

Ananny, Mike. “Toward an Ethics of Algorithms: Convening, Observation, Probability, and Timeliness.” Science, Technology & Human Values 41, no. 1 (2016): 93–117. doi:10.1177/0162243915606523.
Ananny, Mike, and Kate Crawford. “Seeing without Knowing: Limitations of the Transparency Ideal and Its Application to Algorithmic Accountability.” New Media & Society 20, no. 3 (2018): 973–89. doi:10.1177/1461444816676645.
Anderson, Mae. “Facebook Taps Advisers for Audits on Bias and Civil Rights.” AP News, May 2, 2018. Accessed April 17, 2019. https://apnews.com/0e2760399b7c44eb8c5dc1c34dbca1a0.
Andrews, Travis M. “Billboard's Charts Used to Be Our Barometer for Music Success. Are They Meaningless in the Streaming Age?” Washington Post, July 9, 2018. Accessed November 13, 2018. https://www.washingtonpost.com/news/arts-and-entertainment/wp/2018/07/05/billboards-charts-used-to-be-our-barometer-for-music-success-are-they-meaningless-in-the-streaming-age/.
Angwin, Julia, and Surya Mattu. “Amazon Says It Puts Customers First. But Its Pricing Algorithm Doesn't.” ProPublica, September 20, 2016. Accessed December 5, 2018. https://www.propublica.org/article/amazon-says-it-puts-customers-first-but-its-pricing-algorithm-doesnt.
Benjamin, Ruha. Race after Technology: Abolitionist Tools for the New Jim Code. Medford, MA: Polity, 2019.
Bishop, Sophie. “Anxiety, Panic and Self-Optimization: Inequalities and the YouTube Algorithm.” Convergence 24, no. 1 (2018): 69–84. doi:10.1177/1354856517736978.
Bishop, Sophie. “Managing Visibility on YouTube through Algorithmic Gossip.” New Media & Society, June 15, 2019. doi:10.1177/1461444819854731.
Bolukbasi, Tolga, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai. “Man Is to Computer Programmer as Woman Is to Homemaker? Debiasing Word Embeddings.” ArXiv, July 21, 2016. Accessed November 27, 2019. http://arxiv.org/abs/1607.06520.
Braman, Sandra. Change of State: Information, Policy, and Power. Cambridge, MA: MIT Press, 2006.
Broussard, Meredith. “How Computers Misunderstand the World.” By Angela Chen. Verge. May 23, 2018. Accessed December 5, 2018. https://www.theverge.com/2018/5/23/17384324/meredith-broussard-artifical-unintelligence-technology-criticism-technochauvinism.
Bucher, Taina. “The Algorithmic Imaginary: Exploring the Ordinary Affects of Facebook Algorithms.” Information, Communication & Society 20, no. 1 (2016): 1–15. doi:10.1080/1369118X.2016.1154086.
Bucher, Taina. “Cleavage Control: Stories of Algorithmic Power in the Case of the YouTube ‘Reply Girls.’” In A Networked Self: Platforms, Stories, Connections, edited by Zizi Papacharissi. New York: Routledge, 2018a.
Bucher, Taina. If … Then: Algorithmic Power and Politics. New York: Oxford University Press, 2018b.
Burrell, Jenna. “How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algorithms.” Big Data & Society 3, no. 1 (2016). doi:10.1177/2053951715622512.
Cheney-Lippold, John. We Are Data: Algorithms and the Making of Our Digital Selves. New York: New York University Press, 2017.
Competition Bureau. Big Data and Innovation: Key Themes for Competition Policy in Canada. Gatineau: Competition Bureau, 2018. Accessed November 27, 2019. http://epe.lac-bac.gc.ca/100/201/301/weekly_acquisitions_list-ef/2018/18-09/publications.gc.ca/collections/collection_2018/isde-ised/Iu54-66-2018-eng.pdf.
Cotter, Kelley. “Playing the Visibility Game: How Digital Influencers and Algorithms Negotiate Influence on Instagram.” New Media & Society, December 14, 2018. doi:10.1177/1461444818815684.
Crawford, Kate. “The Hidden Biases in Big Data.” Harvard Business Review, April 1, 2013. Accessed November 27, 2019. https://hbr.org/2013/04/the-hidden-biases-in-big-data.
Deeth, Dan. “Over 70% of North American Traffic Is Now Streaming Video and Audio.” Sandvine, December 7, 2015. Accessed November 16, 2018. https://www.sandvine.com/press-releases/blog/sandvine-over-70-of-north-american-traffic-is-now-streaming-video-and-audio.
Diakopoulos, Nicholas. Algorithmic Accountability Reporting: On the Investigation of Black Boxes. New York: Tow Center for Digital Journalism, 2014. Accessed November 27, 2019. https://academiccommons.columbia.edu/doi/10.7916/D8ZK5TW2.
Draude, Claude, Goda Klumbyte, Phillip Lücking, and Pat Treusch. “Situated Algorithms: A Sociotechnical Systemic Approach to Bias.” Online Information Review, ahead-of-print (November 12, 2019). doi:10.1108/OIR-10-2018-0332.
Dubois, Elizabeth, and Grant Blank. “The Echo Chamber Is Overstated: The Moderating Effect of Political Interest and Diverse Media.” Information, Communication & Society 21, no. 5 (2018): 729–45. doi:10.1080/1369118X.2018.1428656.
Elmer, Greg, Ganaele Langlois, and Joanna Redden, eds. Compromised Data: From Social Media to Big Data. New York: Bloomsbury Academic, 2015.
Eubanks, Virginia. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: St. Martin's Press, 2018.
European Commission. “Code of Practice on Disinformation.” September 26, 2018. Accessed October 25, 2019. https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=54454.
European Parliament. “Audiovisual Media Services Directive, Directive 2010/13/EU.” 2010. Accessed October 25, 2019. https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX:32010L0013.
European Parliament. “Audiovisual Media Services Directive (Revision), Directive (EU) 2018/1808.” 2018. Accessed October 25, 2019. https://eur-lex.europa.eu/eli/dir/2018/1808/oj.
Fletcher, Richard, and Rasmus Kleis Nielsen. “Paying for Online News: A Comparative Analysis of Six Countries.” Digital Journalism 5, no. 9 (2016): 1173–91. doi:10.1080/21670811.2016.1246373.
Freedman, Des. The Politics of Media Policy. Cambridge, UK: Polity, 2008.
Gay, Paul du, Stuart Hall, Linda Janes, Anders Koed Madsen, Hugh Mackay, and Keith Negus. Doing Cultural Studies: The Story of the Sony Walkman. 2nd ed. London: SAGE, 2013.
Gill, Navdeep, and Patrick Hall. An Introduction to Machine Learning Interpretability. Oakville, ON: O'Reilly Media, Inc., 2018.
Gillespie, Tarleton. “The Relevance of Algorithms.” In Media Technologies, edited by Tarleton Gillespie, Pablo J. Boczkowski, and Kirsten A. Foot. Cambridge, MA: MIT Press, 2014.
Goel, Vindu. “India Curbs Power of Amazon and Walmart to Sell Products Online.” New York Times, December 26, 2018, sec. Technology. Accessed November 22, 2019. https://www.nytimes.com/2018/12/26/technology/india-amazon-walmart-online-retail.html.
Gray, Mary L., and Siddharth Suri. Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. New York: Houghton Mifflin Harcourt, 2019.
Hallin, Daniel C., and Paolo Mancini. Comparing Media Systems: Three Models of Media and Politics. Communication, Society, and Politics. New York: Cambridge University Press, 2004.
Helberger, Natali. “Exposure Diversity as a Policy Goal.” Journal of Media Law 4, no. 1 (2012): 65–92. doi:10.5235/175776312802483880.
Helberger, Natali. “On the Democratic Role of News Recommenders.” Digital Journalism 7, no. 8 (June 12, 2019): 1–20. doi:10.1080/21670811.2019.1623700.
Helberger, Natali, Kari Karppinen, and Lucia D'Acunto. “Exposure Diversity as a Design Principle for Recommender Systems.” Information, Communication & Society 21, no. 2 (2018): 191–207. doi:10.1080/1369118X.2016.1271900.
Henderson, Fergus. “Software Engineering at Google.” ArXiv, January 31, 2017. Accessed November 27, 2019. https://arxiv.org/abs/1702.01715.
Hindman, Matthew Scott. The Internet Trap: How the Digital Economy Builds Monopolies and Undermines Democracy. Princeton, NJ: Princeton University Press, 2018.
Hoffmann, Anna Lauren. “Where Fairness Fails: Data, Algorithms, and the Limits of Antidiscrimination Discourse.” Information, Communication & Society 22, no. 7 (2019): 900–15. doi:10.1080/1369118X.2019.1573912.
Hunt, Robert. “The Heart's Content: Media and Marketing after the Attention Economy.” Master's Thesis, Concordia University, 2018. Accessed November 27, 2019. https://spectrum.library.concordia.ca/983653/.
Introna, Lucas D., and Helen Nissenbaum. “Shaping the Web: Why the Politics of Search Engines Matters.” Information Society 16, no. 3 (2000): 169–85. doi:10.1080/01972240050133634.
Jannach, Dietmar, Markus Zanker, Alexander Felfernig, and Gerhard Friedrich. Recommender Systems: An Introduction. Cambridge, UK: Cambridge University Press, 2010.
Jasanoff, Sheila. Designs on Nature: Science and Democracy in Europe and the United States. Princeton, NJ: Princeton University Press, 2007.
Just, Natascha, and Michael Latzer. “Governance by Algorithms: Reality Construction by Algorithmic Selection on the Internet.” Media, Culture & Society 39, no. 2 (2017): 238–58. doi:10.1177/0163443716643157.
Kalogeropoulos, Antonis, Samuel Negredo, Ike Picone, and Rasmus Kleis Nielsen. “Who Shares and Comments on News? A Cross-National Comparative Analysis of Online and Social Media Participation.” Social Media + Society 3, no. 4 (2017): 1–12. doi:10.1177/2056305117735754.
Keller, Michael H. “The Flourishing Business of Fake YouTube Views.” New York Times, August 11, 2018, sec. Technology. Accessed November 9, 2018. https://www.nytimes.com/interactive/2018/08/11/technology/youtube-fake-view-sellers.html.
Langlois, Ganaele. Meaning in the Age of Social Media. New York: Palgrave Macmillan, 2014.
Lascoumes, Pierre, and Patrick Le Gales. “Introduction: Understanding Public Policy through Its Instruments? From the Nature of Instruments to the Sociology of Public Policy Instrumentation.” Governance 20, no. 1 (2007): 1–21. doi:10.1111/j.1468-0491.2007.00342.x.
Lascoumes, Pierre, Louis Simard, and Jill McCoy. “Public Policy Seen through the Prism of Its Instruments.” Revue Française de Science Politique 61, no. 1 (2011): 5–22.
Lash, Scott, and Bogdan Dragos. “An Interview with Philip Mirowski.” Theory, Culture & Society 33, no. 6 (2016): 123–40. doi:10.1177/0263276415623063.
Lobato, Ramon, and Alexa Scarlata. Australian Content in SVOC Catalogs: Availability and Discoverability. Melbourne, Australia: RMIT University, 2017.
Luka, Mary Elizabeth, and Mélanie Millette. “(Re)Framing Big Data: Activating Situated Knowledges and a Feminist Ethics of Care in Social Media Research.” Social Media + Society 4, no. 2 (April 1, 2018): 1–10. doi:10.1177/2056305118768297.
Massanari, Adrienne. “#Gamergate and the Fappening: How Reddit's Algorithm, Governance, and Culture Support Toxic Technocultures.” New Media & Society 19, no. 3 (2017): 329–46.
McKelvey, Fenwick. Internet Daemons: Digital Communications Possessed. Electronic Mediations. Minneapolis: University of Minnesota Press, 2018.
McKelvey, Fenwick, and Robert Hunt. “Discoverability: Toward a Definition of Content Discovery through Platforms.” Social Media + Society 5, no. 1 (2019). doi:10.1177/2056305118819188.
Mittelstadt, Brent Daniel, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter, and Luciano Floridi. “The Ethics of Algorithms: Mapping the Debate.” Big Data & Society 3, no. 2 (2016). doi:10.1177/2053951716679679.
Möller, Judith, Damian Trilling, Natali Helberger, and Bram van Es. “Do Not Blame It on the Algorithm: An Empirical Assessment of Multiple Recommender Systems and Their Impact on Content Diversity.” Information, Communication & Society 21, no. 7 (2018): 959–77. doi:10.1080/1369118X.2018.1444076.
Montgomery, Blake. “Fans Are Spoofing Spotify with ‘Fake Plays,’ and That's a Problem for Music Charts.” BuzzFeed News, September 13, 2018. Accessed November 9, 2018. https://www.buzzfeednews.com/article/blakemontgomery/spotify-billboard-charts.
Musiani, Francesca. “Governance by Algorithms.” Internet Policy Review 2, no. 3 (2013). doi:10.14763/2013.3.188.
Myers West, Sarah. “Censored, Suspended, Shadowbanned: User Interpretations of Content Moderation on Social Media Platforms.” New Media & Society 20, no. 11 (November 1, 2018): 4366–83. doi:10.1177/1461444818773059.
Neff, Gina, and David Stark. “Permanently Beta: Responsive Organization in the Internet Era.” In Society Online: The Internet in Context, edited by Philip N. Howard and Steve Jones, 173–88. Thousand Oaks, CA: SAGE Publications, 2004.
Nissenbaum, Helen. “Computing and Accountability.” Communications of the ACM 37, no. 1 (1994): 72–80.
Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press, 2018.
O'Callaghan, Derek, Derek Greene, Maura Conway, Joe Carthy, and Pádraig Cunningham. “Down the (White) Rabbit Hole: The Extreme Right and Online Recommender Systems.” Social Science Computer Review 33, no. 4 (2015): 459–78. doi:10.1177/0894439314555329.
O'Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown, 2016.
Orphanides, K. G. “Children's YouTube Is Still Churning out Blood, Suicide and Cannibalism.” Wired UK, March 23, 2018. Accessed November 9, 2018. https://www.wired.co.uk/article/youtube-for-kids-videos-problems-algorithm-recommend.
Pariser, Eli. The Filter Bubble: What the Internet Is Hiding from You. New York: Penguin Press, 2011.
Pasquale, Frank. The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge, MA: Harvard University Press, 2015.
Perrin, Andrew, and Jingjing Jiang. “A Quarter of Americans Are Online Almost Constantly.” Pew Research Center, March 14, 2018. Accessed November 16, 2018. http://www.pewresearch.org/fact-tank/2018/03/14/about-a-quarter-of-americans-report-going-online-almost-constantly/.
Plantin, Jean-Christophe, Carl Lagoze, Paul N. Edwards, and Christian Sandvig. “Infrastructure Studies Meet Platform Studies in the Age of Google and Facebook.” New Media & Society 20, no. 1 (2016): 293–310. doi:10.1177/1461444816661553.
Portugal, Ivens, Paulo Alencar, and Donald Cowan. “The Use of Machine Learning Algorithms in Recommender Systems: A Systematic Review.” Expert Systems with Applications 97 (2018): 205–27. doi:10.1016/j.eswa.2017.12.020.
Powles, Julia, and Helen Nissenbaum. “The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence.” Medium, December 7, 2018. Accessed December 12, 2018. https://medium.com/s/story/the-seductive-diversion-of-solving-bias-in-artificial-intelligence-890df5e5ef53.
Puppis, Manuel, and Dwayne Winseck. “Platform Regulation Inquiries, Reviews and Proceedings Worldwide.” September 19, 2019. Accessed November 22, 2019. https://docs.google.com/document/d/1AZdh9sECGfTQEROQjo5fYeiY_gezdf_11B8mQFsuMfs/.
Raboy, Marc. Missed Opportunities: The Story of Canada's Broadcasting Policy. Montreal: McGill-Queen's Press, 1990.
Rieder, Bernhard, Ariadna Matamoros-Fernández, and Òscar Coromina. “From Ranking Algorithms to ‘Ranking Cultures’: Investigating the Modulation of Visibility in YouTube Search Results.” Convergence: The International Journal of Research into New Media Technologies 24, no. 1 (2018): 50–68. doi:10.1177/1354856517736982.
Roberts, Sarah T. Behind the Screen: Content Moderation in the Shadows of Social Media. New Haven, CT: Yale University Press, 2019.
Rogers, Richard, ed. Preferred Placement: Knowledge Politics on the Web. Maastricht, the Netherlands: Jan Van Eyck Editions, 2000.
Romm, Tony. “Facebook Got Grilled in the UK on Privacy—While Congress Got into a Shouting Match with Diamond and Silk.” Washington Post, April 26, 2018. Accessed April 17, 2019. https://www.washingtonpost.com/news/the-switch/wp/2018/04/26/diamond-and-silk-came-to-congress-and-they-all-started-screaming-at-each-other-about-facebook/.
Romm, Tony. “Trump Signals He May Not Seek to Regulate Google Search Results.” Washington Post, August 29, 2018. Accessed April 17, 2019. https://www.washingtonpost.com/technology/2018/08/29/trump-signals-he-may-not-seek-regulate-google-search-results/.
Rosenblat, Alex. Uberland: How Algorithms Are Rewriting the Rules of Work. Berkeley: University of California Press, 2018.
Selbst, Andrew D., and Julia Powles. “Meaningful Information and the Right to Explanation.” International Data Privacy Law 7, no. 4 (2017): 233–42. doi:10.1093/idpl/ipx022.
Smith, Brent, and Greg Linden. “Two Decades of Recommender Systems at Amazon.com.” IEEE Internet Computing 21, no. 3 (June 2017): 12–18.
Sutton, Kelsey. “Facebook Video Ad Metric Lawsuit Prompts Publishers to Revisit the ‘Pivot to Video.’” AdWeek, October 19, 2018. Accessed November 9, 2018. https://www.adweek.com/digital/facebook-video-ad-metric-lawsuit-prompts-publishers-to-revisit-the-pivot-to-video/.
Thorson, Kjerstin, Kelley Cotter, Mel Medeiros, and Chankyung Pak. “Algorithmic Inference, Political Interest, and Exposure to News and Politics on Facebook.” Information, Communication & Society (2019): 1–18. doi:10.1080/1369118X.2019.1642934.
Toff, Benjamin, and Rasmus Kleis Nielsen. “‘I Just Google It’: Folk Theories of Distributed Discovery.” Journal of Communication 68, no. 3 (2018): 636–57. doi:10.1093/joc/jqy009.
Tran, Kevin. “Nielsen Adds Facebook, YouTube, and Hulu to Digital Ratings.” Business Insider, August 16, 2017. Accessed November 9, 2018. https://www.businessinsider.com/nielsen-adds-facebook-youtube-and-hulu-to-digital-ratings-2017-8.
Tripodi, Francesca. “Searching for Alternative Facts.” Data & Society Research Institute, May 16, 2018. Accessed September 16, 2018. https://datasociety.net/output/searching-for-alternative-facts/.
Tufekci, Zeynep. “YouTube, the Great Radicalizer.” New York Times, March 10, 2018, sec. Opinion. Accessed March 12, 2018. https://www.nytimes.com/2018/03/10/opinion/sunday/youtube-politics-radical.html.
UNESCO. “2018 Global Report–Re|Shaping Cultural Policies.” UNESCO, 2018. Accessed September 27, 2019. https://en.unesco.org/creativity/global-report-2018.
Vranica, Suzanne, and Jack Marshall. “Facebook Overestimated Key Video Metric for Two Years.” Wall Street Journal, September 22, 2016, sec. Business. Accessed November 9, 2018. https://www.wsj.com/articles/facebook-overestimated-key-video-metric-for-two-years-1474586951.
Waldie, Paul, and Tim Kiladze. “Facebook Gave RBC, Other Companies Preferential Access to Users' Data, Documents Show.” Globe and Mail, December 5, 2018. Accessed December 6, 2018. https://www.theglobeandmail.com/world/article-facebook-gave-rbc-other-companies-preferential-access-to-users-data/.
Winning, Lisa. “It's Time to Prioritize Diversity across Tech.” Forbes, March 13, 2018. Accessed December 5, 2018. https://www.forbes.com/sites/lisawinning/2018/03/13/its-time-to-prioritize-diversity-across-tech/.
Yeung, Karen. “Algorithmic Regulation: A Critical Interrogation.” Regulation & Governance 12, no. 4 (2018): 505–23. doi:10.1111/rego.12158.

Author notes

Both authors contributed equally; names are in alphabetical order.

This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.