Abstract

Which legal instrument can effectively address current challenges in social media governance, and how do companies take on their share of responsibility, shifting away from opaque enforcement of their terms of service and increasingly copying governmental structures? First, this article describes and analyzes how states address hate speech and misinformation in their respective regulatory projects. Second, it examines how social media platforms sanction unwanted content and integrate (or plan on integrating) procedural rules such as appeal and due process principles into their moderation policies. Large social media platforms tend to adopt new structures that resemble administrative law—an uncommon development for non-state actors.

“Toxic Twitter,” “Junk News,” “Zero Tolerance,” “Biggest problem for democracy,” “Fresh Hell”—the headlines on the big social media platforms and the way they handle (or fail to handle) hate speech and misinformation have been negative for a considerable amount of time. Criticism intensified once their possible influence on democratic elections was unveiled (e.g., by the Cambridge Analytica scandal) and more disturbing content made its way into users' newsfeeds, such as the live streaming of the Christchurch shooting in New Zealand.1 After a period during which states were trying to push tech companies to find remedies while both sides essentially remained rather passive, things have changed over the past two years. Through different types of regulation on the one side and through more private ordering on the other, all actors have noticeably become more active. Essentially, state and non-state actors are more eager to enforce countermeasures than before. In this article, I will show that, although a higher number of removals is not desirable per se, the synergies that result from the policies outlined below are beneficial to the normative goals pursued.

States address different concerns regarding online communication through different types of regulation, which are generally subject to criticism due to their possible effects on freedom of expression and information.2 In his work about the contentious role of governments, Benkler described the German regulation as follows: “The most aggressive effort in a liberal democracy to respond to disinformation and hate speech on social media by regulating social media platforms is the German NetzDG law that became effective on January 1, 2018.”3 But is the regulation of social media platforms really “aggressive,” more “aggressive” than private ordering, and does it have negative effects on democracy? In general, regulation is defined as a “rule or directive made and maintained by an authority,”4 hence subject to constitutional proviso and bound to the principles of necessity and proportionality. If an individual freedom enshrined in the constitution is formulated in an affirmative manner (e.g., “Every person shall have the right …”), the legislator might be obliged to adopt laws protecting it—as opposed to a prohibitive formulation (e.g., “shall pass no law”) that protects each individual from legislative activity in that matter but not from legislative passivity. In general, regulatory interventions in speech matters are viewed critically in the United States (which has led to a broad comparative scholarship on the US and German approaches with regard to hate speech). In the first section, the article does not aim at comparing laws in a formal way, but rather at examining two distinct regulatory approaches as attempts to tackle current issues of the digital public sphere, namely the phenomena of hate speech and misinformation. The goal here is to examine not so much the laws in substance as the mechanisms of the regulatory instruments adopted in recent years, namely the German Network Enforcement Act (so-called and hereinafter NetzDG) and the EU Code of Practice on Disinformation.

Hate speech and misinformation are often named together when discussing the challenges of online communication, although they are two very distinct issues, inter alia in the form of harm they cause and the subjects of protection they target. They might, however, have in common that they generate user engagement and therefore play a similarly ambiguous role in the attention economy. In a second step, this article provides an overview of the measures endorsed by social media platforms5 and shows that they are increasingly similar to the instruments used by the state and to principles enshrined in administrative law. These developments appear to be moving toward each other: while the state is potentially delegating power to platforms, the platforms sound less and less like private companies and more like administrative authorities. Additionally, the reasons for the increasing similarities are twofold: states have realized they need to take action given how urgent the problems are, and companies feel the imminent threat of regulation, which makes them more inclined to align their policies with lawmakers' wishes.

The Regulatory Approach to Current Challenges

Hate Speech and Germany

The Problem: Hate Speech

Hate speech is generally defined as “speech expressing hatred of a particular group of people.”6 In countries where laws prohibit hate speech, different definitions can be found in their respective penal codes. A broader definition of hate speech in the legal context is “words which are deliberately abusive and/or insulting and/or threatening and/or demeaning directed at members of vulnerable minorities, calculated to stir up hatred against them.”7 Forbidding this type of speech is not a German peculiarity due to its history, but rather common in many western democracies where restricting fundamental rights in a proportionate manner is legitimate.8 In Germany, freedom of speech is formulated in an affirmative way (“Every person shall have the right freely to express and disseminate his opinions in speech, writing and pictures (…).”9) in Art. 5 (1) Basic Law. By constitutional proviso, this fundamental right may be restricted (“These rights shall find their limits in the provisions of general laws, in provisions for the protection of young persons, and in the right to personal honour.”), provided the restricting law does not infringe the principles of proportionality (“Verhältnismäßigkeit”) and of interdependence (“Wechselwirkung”) with regard to speech-targeting purposes.10 However, the law does not need to be speech-targeting to be subject to the scrutiny of Art. 5 (2) Basic Law.

In the case of hate speech, there is no general provision but several articles in the German criminal code (“Strafgesetzbuch,” hereinafter StGB), the most prominent in the public debate being the criminal liability for denying the Holocaust under sec. 130 StGB. The latter forbids all incitement to hatred in a “manner capable of disturbing the public peace.”11 Beyond this provision, due to Germany's unforgivable history and legacy, other provisions restrict speech to protect human dignity, which is enshrined in Art. 1 Basic Law and cannot be restricted. At the same time, the scope of protection of freedom of expression is very broad and Art. 5 (1) 2 Basic Law strictly prohibits censorship by the State. Hence, speech-restricting laws do not mean less freedom, but rather a different understanding of how hate speech harms public discourse than, for example, under the First Amendment's free speech clause.

This constitutional foundation is essential to understand why large social media platforms were blamed by significant parts of the German public for allowing hate speech.12 Instead of prioritizing individual liberty within the marketplace of ideas, the German model sets limits when human dignity or other fundamental rights are at risk. Since the biggest social media platforms are deeply rooted in the US system, they—in contrast—tend to have a different understanding of free speech.13 The latter might have a spillover effect on other platforms that are perhaps not based in the United States but follow the lead from Silicon Valley. Either way, the concept of free speech according to the First Amendment and the related notion of platforms' absolute freedom to govern their users' speech are predominant in the area of content moderation.14 Adding to this, tackling hate speech on a global scale, across different jurisdictions and with the help of reviewers who are not always familiar with the national context, is an enormous challenge.15 The rise of hate speech on social media platforms also affected Germany, with a (perceived) peak of hateful content in late 2015 when many refugees from Syria arrived in Europe and the German chancellor decided to grant them asylum. Merkel's decision marks the beginning of a significant increase in online hate speech, mainly attributed to racists and members of far-right parties, and of the fear thereof.16 The issue of hate speech online became increasingly pressing because the platforms did not remove content fast enough or sometimes only after being publicly exposed.17 It became more evident by that time that the way in which the platforms moderated user-generated content was very opaque18 and that it was difficult to assess whether they were doing everything they could to remove illegal (by German standards) content. Germany and other EU Member States first hoped to bring the intermediaries to solve the problem of hate speech themselves, but these attempts at governing the platforms via self-regulation failed.19 In a nutshell, it became increasingly difficult for the German government to justify that the biggest social media platforms would allow “Nazi content” or not take it down quickly,20 and that the government had no leverage.

The Reaction: The NetzDG

In his speech to the German Parliament in June 2017, the former Minister of Justice, Heiko Maas, explained that, given the 300 percent increase of hate speech online between 2015 and 2017 and the unwillingness of the big social media platforms to remove this type of content, the government had to regulate.21 He then presented the first draft of the Network Enforcement Act (hereinafter NetzDG)—a law that primarily obliges social media platforms to ensure that “manifestly unlawful” content is removed within 24 hours. After a few amendments, the law was passed in late 2017 and came into force on January 1, 2018. Its goal is to curb the spread of illegal content on social media by forcing the platforms to act on illegal content according to national law and not solely according to their community standards.

To comply with the NetzDG, all platforms with more than two million users in Germany need to implement a user-friendly complaint procedure and to remove “manifestly unlawful” content within 24 hours, or within seven days for less clear cases. Unlawful content is content that violates existing criminal provisions, such as those on libel, defamation, or incitement to hatred. No new substantive provisions were written to fight hate speech online; the NetzDG “only” lists the provisions under which hate speech is already forbidden and obliges the platforms to enforce them. The rationale of the NetzDG is to make sure that illegal content does not stay online longer than necessary and cause harm. Through this regulatory intervention, the German government hoped to tame the phenomenon of verbal coarsening online, that is, hate speech and its presumed negative effects on the public discourse. The first transparency reports published by the companies covered by the NetzDG show that hate speech is still the main reason for complaints and accordingly for takedowns. The reports also show that the vast majority of complaints were handled within 24 hours and that only (relatively) few cases were delegated to external institutions. The reports also give a lot of insight into the weak points of the NetzDG,22 but in general, the coming into force of the NetzDG provoked faster reactions to unwanted content (for better or worse23). This, in turn, raises questions as to the reasonable justifications of takedown decisions and possible overblocking effects due to the high fines (up to 50 million euros) for noncompliance with the NetzDG's obligations.24
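To make this statutory timeline concrete, the following minimal Python sketch models the triage a complaint-handling workflow would have to perform under the NetzDG as described above. The class and function names are hypothetical illustrations, not part of any platform's actual tooling, and the logic is deliberately simplified.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class Assessment(Enum):
    MANIFESTLY_UNLAWFUL = auto()  # clear-cut violation of one of the listed StGB provisions
    UNLAWFUL = auto()             # violation, but requiring closer legal analysis
    LAWFUL = auto()               # no violation of the provisions listed in the NetzDG

@dataclass
class Complaint:
    content_id: str
    assessment: Assessment

def removal_deadline_hours(complaint: Complaint) -> Optional[int]:
    """Return the removal deadline in hours attached to a complaint, or None if
    no removal obligation arises. Simplified: the Act also allows referring
    unclear cases to an external body of regulated self-regulation."""
    if complaint.assessment is Assessment.MANIFESTLY_UNLAWFUL:
        return 24        # "manifestly unlawful" content: 24 hours
    if complaint.assessment is Assessment.UNLAWFUL:
        return 7 * 24    # less clear cases: generally seven days
    return None          # lawful content: no NetzDG removal obligation

if __name__ == "__main__":
    print(removal_deadline_hours(Complaint("post-1", Assessment.MANIFESTLY_UNLAWFUL)))  # 24
    print(removal_deadline_hours(Complaint("post-2", Assessment.UNLAWFUL)))             # 168
```

The sketch makes the central delegation visible: the legally decisive step is not the deadline arithmetic but the assessment itself, which the Act leaves to the platform's reviewers.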

Delegating Power to Platforms

The NetzDG has been under attack from the very beginning, for multiple reasons, some already mentioned earlier.25 One main concern is the “privatization” of the judiciary as a side effect of the platforms complying with the obligation to remove unlawful content within 24 hours.26 The obligation to remove unlawful content is in itself not problematic, but who gets to decide whether user-generated content is “manifestly” unlawful? By delegating this task to social media platforms, the State has factually given the responsibility to decide upon the lawfulness of content to the reviewers in charge of content moderation. While the regulatory intent was to make the platforms take more responsibility for problematic and illegal content, the result was a significant increase of power on the platforms' side. Given the complexity of the task (applying national criminal law and interpreting vague legal terms), many criticized its delegation to the private companies running the social media platforms.27

Whether this was really causal for the takedown decisions made after the NetzDG came into force, and whether it led to over-removal of content and, consequently, chilling effects on users, remains unclear.28 The German legislator included the obligation to publish biannual transparency reports for all platforms affected by the NetzDG. However, because the platforms implemented the NetzDG in different ways, the reports do not all provide reliable numbers29 and moreover suggest the opposite of over-removal.30 The question raised by the NetzDG is nevertheless more fundamental: may a state delegate this type of decision to the private host of online speech and leave it at the discretion of someone who is not familiar with national law? When it comes to the specific elements of an offense, one would need to know and practice criminal law, and consider the context of the content.31 Also, using overly vague terms such as “manifestly” can be problematic.32 Clear legal definitions and specific criteria are mandatory to constrain a platform's discretion.33 In turn, platforms should commit to disclosing their definitions and the way they put legal requirements into practice.34

The central finding here is that the State delegates power to private actors to combat hate speech but is not precise enough in the wording of such transfer provisions. It hands over the power of evaluation and interpretation, creating a quasi-additional first instance upstream of the judiciary. Of course, citizens can still take action and contest a takedown decision in court. The initial decision to take down or leave up content is nonetheless made by the platform and, all in all, the user is subject to more rules in cyberspace—a development predicted by Palfrey. He identified four phases of Internet regulation, according to which we are now in phase 4, the phase of “contested access” (from 2010 onward).35 What Palfrey envisioned back in 2010, when he predicted the phase of “contested access,” is not so far from reality today. The Internet is no longer “free and lawless” and citizens see the digital as an integral part of their daily life. States have become more active in regulating cyberspace directly or indirectly. Intermediaries have to keep up with different regulatory frameworks and tend toward one-size-fits-all solutions that can collectively be more restrictive than single national regulations.36

Fake News and the EU

The Problem: Misinformation

Another issue on social media platforms and other intermediaries is the dissemination of misinformation, often referred to as fake news (although this term is imprecise and rather a buzzword). Fake news can be defined as “misinformation designed to mislead readers by looking like and coming across as traditional media.”37 Technical means such as social bots can be used to speed up sharing, make it more effective, or make it anonymous, without human intervention.38 When false information is designed and spread with the intention to mislead the recipient of the information, it is called disinformation.39 The element of intent is key when discussing the risks of disinformation for representative democracies because it raises the question as to the level of protection of freedom of expression and information as a precondition for participating in a democratic system. The intent to spread false information is closely interwoven with the analysis of user data to identify the target groups needed to place targeted political advertising. However, political microtargeting and the associated phenomenon of “dark ads,” that is, political advertising that is only visible to certain users, do not fall within the narrow definition of fake news.

We are only now assessing to what extent disinformation might have affected past elections. Although the increasing use of social media in the “leave” campaign for Brexit and during the 2016 US elections was critically observed by European lawmakers, the role social media platforms played became fully apparent only after the French elections in 2017,40 and the real turning point was the Cambridge Analytica scandal in March 2018.41 Fake news is not a new phenomenon, but the issue has become more pressing with the increasing availability of data. The disseminators of fake news have benefited from social networks to reach more people, and from technology that enables faster distribution and can make it more difficult to distinguish fake from hard news.42 Through its loose privacy policies, Facebook had given access to users' data in an unprecedented way: the private information of more than 50 million individuals had been used to predict and influence voting choices during the 2016 US presidential election.43

The research conducted afterward showed that the harvested data was used to target voters with political ads and could be used to manipulate them in a way that would be invisible to the public, posing a high risk to democracy. The fact that a company like Facebook would allow third parties to use behavioral advertising for political purposes, or at least would not take the necessary steps to protect users' data, was shocking to everyone who learned about the privacy breach in March 2018.44 It also became clearer that Facebook's business model, that is, keeping users on the platform at all costs (attention economy), was more favorable to certain—more engaging—types of content, which might include fake news.45 In fact, the economics of user engagement explains why stories around the events of 2016 and 2017 were designed to be provocative and to catch users' attention.46 We cannot conclude from this observation that Facebook keeps fake news online on purpose, but it might indicate that the algorithm favors content with high engagement potential. In recent elections in Brazil, India, and Israel, social media have been increasingly used to purvey false information. Misinformation does not create previously unseen social divides; instead, it is used to fuel existing tensions between political, social, or religious groups.47 It will, therefore, serve populist campaigns more than others. In Brazil's case, the use of the messenger service WhatsApp to propagate misinformation about the rivals of far-right candidate Jair Bolsonaro could have led to his victory.48

The impact of misinformation on election results is contested because it is difficult to measure. Only the engagement of users with different types of content can be quantified, not the impact it has on their political opinions, nor their reasoning when choosing a candidate or a party.49 Addressing this issue is particularly complicated for lawmakers because the problem of misinformation lacks a solid factual basis: the true impact on electoral behavior is hard to measure, which is why one has to be very careful when designing and enacting regulatory frameworks. Another challenge is the possible backfiring of a regulation targeting fake news: it could have a negative effect on the freedoms it is aiming to protect.50 While intended to protect the free formation of public opinion and the integrity of public discourse, such a regulatory intervention would also have to combat the lack of trust in media outlets and the general mistrust of traditional gatekeepers and politics. It would attempt to intervene in the “feedback loop”51 of misinformation without disrupting the free flow of information.

The Reaction: EU Code of Practice on Disinformation

Fake news can be assigned neither to a political color nor to a certain country of origin, even if there are indications that it partly comes from abroad with the aim of interfering in the internal affairs of other countries, that is, of influencing internal politics, possibly to the originators' advantage.52 In light of the elections for the EU Parliament in May 2019, disinformation and potential influence from right-wing populists were at the center of attention. Recent work has shown that the EU was not sufficiently prepared against disinformation,53 even if some experts say we should not overestimate the power of “fake news” in Europe by underestimating the ability of content recipients to identify false information.54 It was nevertheless a major concern, especially because voter turnout for the EU Parliament has traditionally been below average and could have been pulled down further by anti-EU content. Until recently, except for France, the EU and its Member States had not taken any action, leaving it to journalists to unpack, crosscheck, and reveal disinformation, thus making them the driving force when it comes to countering misinformation. Due to recent elections in other countries and the fear of interference from abroad, Member States are now considering regulatory interventions. In December 2018, France adopted a law against information manipulation in times of elections, introducing an additional form of expedited proceedings and higher transparency requirements as to political advertising on social media platforms.55 On the supranational level, the EU and leading tech companies (hereinafter signatories) agreed upon the self-regulatory Code of Practice on Disinformation (hereinafter CPD) in September 2018.56

The CPD defines disinformation as “verifiably false or misleading information” which, cumulatively, (a) “is created, presented and disseminated for economic gain or to intentionally deceive the public”; and (b) “may cause public harm,” intended as “threats to democratic political and policymaking processes as well as public goods such as the protection of EU citizens' health, the environment or security.” The CPD is organized into five relevant fields of commitment, each tackling the problem of disinformation from a different angle and with different measures. Essentially, the CPD encourages the signatories to implement self-regulatory standards to fight disinformation.57 They shall commit to “deploying policies and processes to disrupt advertising and monetizing incentives for relevant behavior” and commit to “enable public disclosure of political advertising.” The latter “could include actual sponsor identity and amounts spent.” They also commit “to put in place clear policies regarding identity and the misuse of automated bots,” as well as “policies on what constitutes impermissible use of automated systems.” To accomplish the goal of the CPD, namely “to address the challenges posed by the dissemination of disinformation,” the various signatories are allowed to operate differently, according to their respective modus operandi. Hence, they can choose whether and how they comply with the commitments of the CPD, and to what extent they will go beyond their preexisting own set of rules in this area.

Self-Regulation as a Leverage?

So far, social media platforms have not reacted favorably to the demands of governments with regard to unwanted political misinformation. Political content ahead of elections was largely accepted as opinion and not subject to takedown.58 In addition, platforms offered their behavioral advertising services without questioning the content and were reluctant to intervene in the area of fake or nonhuman (bot) accounts. Again, because most platforms come from a First Amendment understanding of free speech, and because the boundaries of political speech are generally difficult to draw, removing this type of content can tip over into collateral censorship, which explains their reluctance to delete.59 Nevertheless, the EU chose a self-regulatory form of governance to address the issue of misinformation. In doing so, the EU entrusted the problem to the platforms, in their field of action and at their discretion. Self-regulation is, generally speaking, a process “in which rules that govern market behavior are developed and enforced by the governed themselves.”60 There might be an instrument of soft law as a basis for these self-regulatory activities, that is, an agreement whose terms bind the companies, but one that has no legislative act as its basis.61 Codes of practice provide detailed practical guidance on how to comply with legal obligations, and the signatories consent to follow this guidance unless another, higher standard is in place.62 However, the signatories are not bound to any specific deliverable, which leads to the question of whether this type of instrument is legitimate and sufficient. If self-regulation is the first step before lawmakers intervene, then it might not be an adequate answer to the problem of misinformation because the issue is simply too urgent.63

The signatories of the EU CPD all agree on the objectives and the suggested measures, but there is no legal obligation to implement them, let alone rules on how to do so. The question that arises next is how to classify measures taken on the basis of such codes. When might state action in the form of soft law be considered an indirect infringement of fundamental rights if it is implemented by private entities?64 Soft law instruments are flexible in terms of implementation but can be questionable when it comes to fundamental rights and the right to take legal action: who is to be held accountable for the implementation of the recommendations?65 This problem is not new: cooperation between the state and non-state actors based on agreements is often criticized when hidden from the public. Shall we consider enforcement by proxy an “unholy alliance” or necessary cooperation between the State and private intermediaries?66 In a contemporary concept of state action, should we also include private behavior that can be attributed to the state on the basis of its intention when it adopts “soft law”?67

Interim Conclusion

In this first part, I have looked at how lawmakers respond to contemporary challenges on social media platforms such as hate speech and misinformation. In both cases, the state allocated the enforcement of rules or of guiding objectives to the platforms, or at least it allocated forms of power, understood as the possession of control, authority, or influence over others.68 The two use cases show that—even after regulation or agreements—the platforms still govern the evaluation and interpretation of what actually constitutes unlawful content (in the case of the NetzDG) and of what type of false information might be harmful to democracy (in the case of the CPD).

The platforms' definitions of problematic phenomena such as hate speech and fake news do not always correlate with the offenses or misdemeanors targeted by the law. User-generated content that violates the law is unlawful under national law; content that infringes a platform's standards is unwanted. In many cases, that type of content will be both unwanted and unlawful, but that heavily depends on a country's laws.69 From a European perspective, it seems quite natural that freedom of expression can be limited by law. Article 10 (2) European Convention on Human Rights (ECHR), for example, stipulates that “The exercise of these freedoms, since it carries with it duties and responsibilities, may be subject to such formalities, conditions, restrictions or penalties as are prescribed by law and are necessary in a democratic society, (…).”70 Similarly to the German Basic Law, a limitation of Art. 10 (1) ECHR can be justified by the pursuit of one of the goals mentioned in Art. 10 (2) or based on the additional reason for limitation in Art. 10 (1) 3 ECHR.71 In the United States, the scope of protection is broader: speech that does not fall into one of the few categories of unprotected speech within the First Amendment is allowed as an opinion.72 Bearing in mind that the biggest social media platforms come from the United States and that their in-house and external legal counselors are most likely to be US-trained lawyers, the foundation for a platform's policies is free speech (non-)regulation under the First Amendment.73 This brings us to the question of how the corporations running social media platforms react to the challenges inherent to communication on their platforms.

The Corporate Reaction

Hate speech and misinformation are two main issues for social media platforms, which they have so far addressed with more or less strict content moderation rules.74 The latter have evolved over the past years, and due to a combination of circumstances (leaks and voluntary disclosure), the world was granted more insight into these complex sets of rules.75 In addition to moderating user-generated content according to their community standards, platforms are increasingly turning to countermeasures that resemble state action: they create rules, enforce them, and “punish” those who break the rules not only by taking down unwanted content but also by restricting their access to the platform, sometimes withholding or deleting accounts. At the same time, they commit to more transparency regarding the rules of enforcement, as well as to due process measures such as allowing appeals. While companies like Facebook sometimes bypass regulatory intervention,76 they create an apparatus that is on the surface similar to the state. The seeming similarity lies in the choice of instruments used to enforce rules and in providing users with procedural remedies. So far, such similarities are limited to external appearances: it seems that, after a period of perhaps overestimating users' tolerance for the mistakes made, platforms are now more receptive to outside pressure.77 Public opinion could form a type of counterpower that companies might bow to when they sense the users' demand for clear and transparent rules.78

Governing Speech with Private Rules

While it might sound obvious, platforms are (until now) protected by their freedom to govern their relationship with users, that is, by private autonomy. Whether this relationship is considered contractual or not (depending here again on the national perspective), the terms of this agreement and their enforcement are to a great extent at the platforms' discretion.79 At the same time, platforms are facing higher societal pressure than a few years ago to behave according to their role in the new public sphere they have in part created.80 In Germany and in the jurisprudence of the European Court of Human Rights, fundamental rights such as freedom of speech and information may have a certain horizontal effect.81 The latter can occur directly or indirectly, that is, by binding private parties as such to fundamental rights or via court decisions between two private parties in which the judges ought to include and consider fundamental rights in their deliberation and, accordingly, in their opinion. The novelty does not lie in the doctrine of horizontal effect itself but rather in its interpretation when it comes to platforms. Due to the platforms' important influence on communication and access to information online, courts could be more inclined to identify a direct or indirect binding effect of freedom of speech and information in cases related to content moderation. Recent decisions by German courts at the district and regional level show that judges are willing to balance the rights of users and platforms when users take action against takedown decisions, generally in favor of users' right to express themselves when the contested content is not unlawful. In the following subsections, I will elaborate on the set of measures enforced against unwanted content, showing that this enforcement has so far not been different from the common practice between two private parties, but that it might be shifting toward practices usually attributed to administrative law.

Taking Down Content

Social media platforms decide which content they allow to be published and to remain visible according to their own sets of rules (“Facebook Community Standards,” “Twitter Rules,” “Reddit Content Policy,” “YouTube Community Guidelines,” etc.). These rules are generally enforced via notice and takedown, a two-step procedure owed mainly to laws that limit the platforms' liability for user-generated content.82 Such laws are based on the assumption that social media platforms act as mere content-neutral intermediaries and should not be held responsible for bad content, when in fact the platforms govern what type of content will be available and visible according to their standards and sets of rules.83 When users flag content and submit it for review, it is reviewed by content moderators who eventually decide whether or not to delete it, according to their own interpretation (often shaped by a different cultural background, which is another concern in this area).84 Over time, the rules have been iterated and adapted according to the needs and developments of each community.85 When the interpretation guidelines provide no answer to a specific content problem, the reviewers have to make ad hoc decisions, which might be escalated to and revised by another, generally more specialized, team.86 If the content is found to violate a platform's rules, the consequence is removal, that is, primarily worldwide deletion, or geoblocking in cases where the violation affects only a certain country or region.87
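As an illustration only, the following Python sketch traces the notice-and-takedown flow described above. The names (Flag, Decision, review) and the decision criteria are hypothetical simplifications, not any platform's actual moderation pipeline.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Flag:
    content_id: str
    reporter_country: str
    reason: str

@dataclass
class Decision:
    action: str                                   # "keep", "remove_globally", or "geoblock"
    blocked_countries: List[str] = field(default_factory=list)

def review(flag: Flag,
           violates_platform_rules: bool,
           violates_local_law_only: bool,
           guidelines_cover_case: bool) -> Optional[Decision]:
    """Second step of notice and takedown: a moderator reviews flagged content.
    Cases the interpretation guidelines do not cover are escalated (returned as
    None here); rule violations lead to worldwide removal, and purely local
    illegality to geoblocking in the affected country."""
    if not guidelines_cover_case:
        return None                               # escalate to a more specialized team
    if violates_platform_rules:
        return Decision("remove_globally")
    if violates_local_law_only:
        return Decision("geoblock", blocked_countries=[flag.reporter_country])
    return Decision("keep")
```

The sketch highlights that the outcome hinges entirely on how the reviewer fills in the three boolean judgments, which is precisely the interpretive discretion discussed in this section.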

Restricting, Withholding, or Deleting Accounts

The following measures against violations of the rules are more drastic than taking down contested content and will probably only be applied when violations happen repeatedly. One sanction can be to restrict access to a platform's services; another is to withhold or delete individual accounts. Sanctioning can also result in temporarily limiting the user's ability to create posts or interact with other users. In the past, platforms seem to have been reluctant to withhold or delete profiles, sometimes even leaving it to other users to act upon profiles that disseminated hate speech.88 In the aftermath of the events mentioned earlier, such as Brexit and other election campaigns, Twitter started massively removing unwanted accounts89 and Facebook started removing so-called “bad actors” or “coordinated inauthentic behavior,” that is, multiple pages, groups, and accounts found to be “working to mislead others about who they are and what they are doing.”90

All in all, platforms mostly react to unwanted content on an individual level: their sanctions affect single users or single accounts. But of course, they can also enforce their rules by design, thereby affecting the whole community and achieving broader results. To prevent users from making use of the services in an undesired way, platforms can restrict how users interact with the front end. For example, when a messenger service restricts the number of contacts to which a message can be forwarded, or when a network no longer allows users to add accounts they are not friends with to groups, misinformation cannot be distributed as easily as before. WhatsApp is, for instance, restricting the mass propagation of information to countless contacts, in reaction to the allegations surrounding the Brazilian and Indian elections in 2018.91 Similarly, Facebook is restricting the use of groups that “repeatedly share misinformation.”92 It also strengthened the role of group moderators and restricts users from randomly adding other users to groups. Hence, the individual is not exposed to group activities that she does not actively subscribe to. This type of change is not to the users' detriment since opt-in solutions are generally preferable for consumers compared to opt-out.93 Design choices affect all users in the same way and do not target speech based on its content, which makes them less rights-infringing than content-based sanctions.
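A minimal sketch of such design-level enforcement, assuming a hypothetical forwarding cap and an opt-in rule for group membership; the constant and function names are illustrative and do not reflect any platform's actual implementation.

```python
from typing import List, Set

MAX_FORWARD_RECIPIENTS = 5  # illustrative cap in the spirit of WhatsApp's forwarding limits

def forward_message(recipients: List[str]) -> List[str]:
    """Design-level limit on mass forwarding: the cap applies to every user and
    every message alike, regardless of what the message says."""
    if len(recipients) > MAX_FORWARD_RECIPIENTS:
        raise ValueError(f"forwarding is limited to {MAX_FORWARD_RECIPIENTS} recipients")
    return recipients

def add_to_group(group: Set[str], candidate: str, candidate_opted_in: bool) -> Set[str]:
    """Opt-in group membership: a user is only added if she actively agreed."""
    if candidate_opted_in:
        group.add(candidate)
    return group
```

The point of the sketch is that neither function inspects the content of a message or a group: the restriction is content-neutral by construction, which is what distinguishes design-based enforcement from content-based sanctions.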

Restricting the Usage for Unwanted Purposes

To summarize, platforms have their own sets of rules when it comes to user-generated content and react differently to users violating those rules. The sanctions differ in how much they affect users, but the bigger networks use similar sanctions to counter negative phenomena such as hate speech and misinformation. Content moderation does not resemble the mechanisms of administrative law, but rather those of domestic authority: the companies owning a social media platform govern it via private rules whose interpretation is solely at their discretion. Therefore, the relationship between a user and a platform is more comparable to an agreement to use a room under certain terms and conditions. I base this argument on the fact that users often do not know why their posts were deleted, how they can appeal this decision, and to what extent they will be affected by the platform's sanction. Because content moderation is mainly perceived as a case-by-case review and not as following procedural rules, it cannot (so far) be compared with any formal interaction with the state.

The Sound of Administrative Law

Since the Cambridge Analytica scandal, Facebook has adopted a tone that resembles that of the state when describing its internal governance structure, and its actions tend to imitate governmental behavior; that is, it has announced structures and procedures similar to administrative law.94 First, it published the guidelines based on which reviewers decide how to interpret the community standards. Second, it announced an independent body of appeal which could potentially empower users to contest takedown decisions and other sanctions. These two measures lead to more exposure and hence more accountability, which is more typical of the state, whereas private parties are not subject to principles such as due process.95 Companies may engage in stronger internal regulation for various reasons, but such engagement generally has a countereffect on the State's activities in terms of regulation, inspection, and so on.96 This observation can, for example, be applied to the area of content moderation, where social media platforms fear more interference from (European) lawmakers.97 The reforms decided by Facebook throughout 2018 were motivated by multiple reasons, certainly in part to avoid more laws comparable to the NetzDG. The extent to which Facebook intends to self-regulate and to delegate a field of influence that is so crucial to its business goes beyond what companies usually commit to in their self-regulatory declarations.98

More Transparency

A main point of criticism had been that people outside of Facebook could not comprehend how the community standards were interpreted and enforced.99 In some cases, takedown decisions seemed incoherent, which confirmed the suspicion that they were made ad hoc and applied double standards.100 Facebook increasingly committed to transparency and came forward by publishing its body of internal enforcement guidelines.101 It is now slightly clearer how Facebook defines hate speech and which content falls under this definition.102 The guidelines disclose that, in case of doubt, Facebook will remove content it has identified as hate speech and where a diverging intent is not recognizable (“Where the intention is unclear, we may remove the content.”). Also, Facebook attributes higher credibility to content that was posted under a clear name, assuming that such users are less likely to post hateful content (“we believe that people are more responsible when they share this kind of commentary using their authentic identity”). Nevertheless, categories such as hate speech remain broad and their definition vague.

As to the problem of misinformation, Facebook's guidelines for moderating false information are rather short, stating that it does not remove false news but reduces “its distribution by showing it lower in the News Feed.” Measures are mainly to disrupt “economic incentives” to “propagate misinformation” and to reduce “the distribution of content rated as false.”103

Implementing a Body of Appeal

As part of the changes in the aftermath of the Cambridge Analytica scandal, Facebook's CEO Zuckerberg announced in late 2018 the implementation of an appeals process for takedown decisions and, later, for any decision users would be subject to.104 An appeals process would allow users to express their disagreement with any of the measures explained in the earlier subsection, giving them the chance to contest a sanction. The way the body of appeal decides upon users' requests would be governed by an independent oversight board.105 The appeals process and the oversight board are two distinct but closely interconnected elements, since the oversight board's work will potentially have a direct effect on the outcome of an appeal. According to the Draft Charter, it will “be a body of independent experts who will review Facebook's most challenging content decisions—focusing on important and disputed cases. It will share its decisions transparently and give reasons for them.”106 So far, the way content decisions shall be reached has not been finalized (“how to render independent judgment on some of Facebook's most important and challenging content decisions.”107). The Board will probably work in smaller panels populated by board members reflecting both diversity and local concerns.108 Many questions still need to be answered before an institution of that kind is actually fully operational (planned for late 2019).

Although these developments altogether sound user-friendly and suggest a striving for a more transparent and fair Facebook, one must bear in mind several deficits: the possibility to contest a decision does not automatically mean that users will know on which grounds the initial decision was taken in the first place. It also does not guarantee more transparency in terms of explaining why an appeal is granted or dismissed.109 The oversight board will not be able to review all of the appeals filed because there are simply too many takedown decisions. Instead, the aggrieved user will probably be allowed to file an appeal that will be submitted to content reviewers. Only specific cases (probably those of fundamental significance for future cases110) will be submitted to the Oversight Board. If the oversight board is not regarded as independent, its decisions will not ameliorate the way content is moderated on Facebook.
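Purely as an illustration of this two-tier routing, the following sketch encodes the escalation logic described above. The criterion of “fundamental significance” and all names are hypothetical placeholders, since the selection mechanism had not been finalized at the time of writing.

```python
from dataclasses import dataclass

@dataclass
class Appeal:
    content_id: str
    user_id: str
    fundamental_significance: bool  # e.g., raises a novel or recurring policy question

def route_appeal(appeal: Appeal) -> str:
    """Most user appeals go back to (other) content reviewers for a second look;
    only selected cases of fundamental significance would reach the Oversight
    Board, whose decision would then bind the moderators for that case."""
    if appeal.fundamental_significance:
        return "oversight_board"
    return "second_instance_review"

if __name__ == "__main__":
    print(route_appeal(Appeal("post-42", "user-7", fundamental_significance=True)))   # oversight_board
    print(route_appeal(Appeal("post-43", "user-8", fundamental_significance=False)))  # second_instance_review
```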

After a period of consultation, Facebook expressed the wish to have a fully operational oversight board by the end of 2019. The “Global Feedback and Input on the Facebook Oversight Board on Content Decisions” report (published in June 2019) shows that the selection process for board members was one main concern expressed by the experts consulted. Per this report, Facebook plans to select the first round of approximately 40 board members for a period of three years and to leave it up to them to select their successors.111 The cases for review will “come before the Board through two mechanisms: Facebook-initiated requests and user-initiated appeals.”112 As to the effect of the decisions made by the Board, Facebook commits to being fully bound “on the specific content brought for review” and recognizes that the Board's decisions “could potentially set policy moving forward,” which means that they will have “precedential weight” while still allowing the Board to differ (“flexibility”).113 It remains to be seen which parts of this report will eventually be perpetuated in the structure of the Board. Lastly, there is no certainty about how appeals will be granted and remedies implemented. Put-back rights do not per se include the right to be visible in the same way a post would have been without the initial takedown. Of course, a remedy granted after a successful appeal could include being published where the initial content was supposed to appear, but considering the high density of online content, such a form of remedy would not be sufficient because of the lack of visibility. The appellee might commit to republishing the content and making it visible in the way it was initially published—assessing the implementation of such a commitment will still be very difficult given the nature of each user's individual algorithmic selection. This issue of put-back rights versus visibility is often overlooked, although it is central when thinking about remedies against unsubstantiated removals.

The novelty of Facebook's Oversight Board does not lie in the establishment of a separate board or a commission overseeing the activities of a company's board of executives. This type of control mechanism can be found in countries following a two-tier system in which the supervisory board is composed of nonexecutive directors.114 It is the setup of a sort of court, open to users' requests and with decisions that affect the company's daily business, that is in itself unprecedented and shows the wish to finally gain the public's trust. The judiciary and its independence from other sources of power, as an inherent feature of its own power, is an attractive model to follow because it symbolizes high integrity and legitimacy with respect to the decisions it takes and, in general, checks and balances. The social logic of courts has been studied in the past and needs to take a bigger part in this conversation.115 In order to take credit for giving up responsibility and power, that is, for subordinating content moderation policies to the decisions of a board of people who are independent of the corporation, one would need to follow certain guidelines. It is not enough to call such an oversight body “independent” and to delegate accountability for unpopular decisions. The question, however, is more substantive: how does one define the judiciary's independence, which requirements would a private body have to fulfill to be truly comparable to a court, and is that even conceivable in our legal systems? Just as state authority cannot be assumed by a private actor (or only up to a certain degree), the explicit authority of the judiciary is beyond question.

Conclusion: From Private Ordering to a New Administrative Law?

The Described Actions by States and Companies Create Synergies

While states are still struggling to identify adequate answers to the regulatory challenges posed by online communication, intermediaries and especially social media platforms are increasingly considering the implementation of procedural rules in order to assume their responsibilities. Is the platforms' enforcement of their own rules becoming the administrative law of this genre of the public sphere? Are we witnessing a real shift of responsibility to the private actors, as many scholars have been urging?116 Platforms moderate content and sanction users for violating their terms of service on the grounds of rules that are increasingly disclosed to the public. Takedown decisions on an individual level may—at least in Facebook's case—now be appealed. The case could then potentially be escalated to a higher instance, such as the highest appeal body (similar to a higher level of jurisdiction). This notice of appeal would require full disclosure of the reasons why the initial takedown decision was made, so that users can justify why they filed an appeal. The decision reached by the highest appeal body (e.g., the “Oversight Board”) would be binding on all content moderators (the Executive), who could eventually be forced to reverse the removal. All in all, the procedure would resemble administrative procedure, but within a private corporation.

If we consider how both Germany and the EU transfer the power of evaluation and interpretation to platforms on one side, and the way platforms establish mechanisms supposed to introduce transparency and due process on the other, two main conclusions can be drawn. First, states and corporations seem to follow the same substantive goals with regard to hate speech and misinformation, even if for different reasons. Second, they seem to move toward each other, which could end with them eventually meeting halfway. To the first point: just as states had (partly) forbidden hate speech and disinformation by law and through jurisprudence before the social web emerged, some are now regulating online speech (directly or indirectly) in order to protect democracy and for reasons of social cohesion. The companies running social media platforms are equally not in favor of contentious content such as hate speech or misinformation, because they do not want to lose customers (which is why they commit to moderating content in the first place).117 Although so-called clickbait content might be beneficial to their business model, contentious content is in the long run not attractive to users. In light of this finding, platforms will perhaps consider building a more sustainable business model.118 Either way, it is now in every actor's (state or private) interest to act against hate speech and misinformation (at least within their own sphere of influence). This might include new structures such as transferring the power of interpretation (NetzDG, CPD) or building parallel evaluation structures (individual takedown decisions, appeal procedures). Hence, even if Facebook's proposal is largely perceived as a blame-shifting campaign, it would factually help to clear the company of the blame it earns for the way it moderates content119 and could turn out to be a turning point in social media governance.

States and Companies Sharing Responsibilities?

To my second point: states and platforms are moving closer to each other in the way they govern online speech, although a stronger trend can be observed in the platforms adopting mechanisms traditionally associated with state actors. This is different from the idea of shared responsibility in the concept of coregulation, in that it is not precisely based on a regulatory arrangement.120 On the contrary: it might be the lack of a regulatory arrangement that pushes private platforms to adopt this type of state-like procedure. Coregulation as a form of cogovernance between platforms and the State relies on a regulation that leaves more than average room for the affected private actor to implement.121 Measures of coregulation provide more guidance and, accordingly, more accountability than self-regulatory measures, which goes hand in hand with less autonomy for the parties concerned. The fact that private actors are aiming for more accountability could partly be due to the uncertainty of intermediary liability on a global scale: on the one hand, private actors can—within the leeway of private autonomy—design their own contractual relationship with customers, and on the other, they ought to comply with specific intermediary regulation in some countries but not in others. It usually takes time to comply with a new regulation because it first needs to be filled with meaning via interpretation. This uncertainty is maximized when private actors operate in multinational contexts because of the potential compliance obligations, hence it could incentivize them to proactively steer a course of self-regulation.122 The question is whether the situation described earlier as “meeting halfway” is different from what has been observed so far when a new regulatory challenge was identified. The hitherto existing scholarship on legal endogeneity has focused on theorizing how private actors comply with new regulation and how they fill vague terms with their interpretation until courts confirm or change this meaning.123 Although the preambles and explanatory memorandums of new laws give background information and help translate nonspecific terms into corporate policies that align with the initial regulatory goals, there is an interpretation gap that private actors can fill at first. Gilad characterized as “managerialized” a sequence in which companies fill the room left by the law with managerial values and goals.124 The “managerial” layer is the subsequent transposition of the regulatory goals into business reality.

However, the present situation is different in that private actors fill their own corporate rules with state-like procedures instead of managerial values and goals. They do not await a regulatory act that would infuse the contractual relationship between users and platforms with principles usually known from constitutional law, such as the protection of liberties. Turning toward structures and rules that resemble those of state actors is uncommon because it is usually not to a private actor's advantage. Corporate codes of conduct are, for example, perceived as a substantive form of self-regulation, but it is still uncommon for private actors to commit to mechanisms that only state actors endorse, because of the obligations that come with them.125 All the more so as the strict regime of administrative law lacks the flexibility of private law and is consequently less favorable to the actors it is imposed upon. Therefore, it seems counterintuitive that private actors in a regime of self-regulation would fill the broad room for interpretation (granted on purpose by the legislator) with rules commonly associated with the strict scrutiny imposed upon state actors. At the same time, it suggests higher procedural accountability,126 generally associated with “better rules,” that is, rules “facilitating adherence to public interest goals and constraining diversion to private interests.”127 The fact that administrative law incorporates the principles of the rule of law and of due process makes it attractive when the values one wishes to convey are trust and transparency.128

When using the term administrative law, I specifically do not mean the area of digital constitutionalism.129 Of course, one cannot fully separate administrative law in the context of social media platforms from the broader conversation on Internet governance and global constitutionalism, but it is the way non-state actors incorporate mechanisms of administrative law that I wish to highlight here. Other researchers have examined whether large platforms such as Facebook could be treated as a state, comparing their sets of rules to a constitution.130 Although this approach acknowledges the power that big tech companies have over users worldwide, assimilating them with or treating them as states might not be the way forward. Nonetheless, due process and fundamental rights should play a bigger part in the process of content moderation and could be incorporated by states and companies alike.131

This article has shown that the rules adopted by the State and the platforms might converge to some extent, as opposed to those of the platforms and the users. Helberger, Pierson, and Poell thought of the convergence of platforms and users as “a cooperative responsibility,” based on the idea that “platforms and users need to agree on the appropriate division of labor with regard to managing responsibility for their role in public space.”132 In contrast to this proposition, the developments explained in the present article show that cooperation is factually happening between the State and the platforms, beyond regulatory arrangements, because both are governors.133 In my opinion, users are important actors in this matter, but they should not be allocated more responsibility than the law already provides. Users can be held accountable for their behavior and the way they communicate online, but not for the structural deficiencies of platforms. Because users generate the content that forms the basis of transactions in the attention economy, they are contributing sufficiently to that ecosystem. They produce and consume content and spend time on social media platforms either way, which generates more data beneficial to the platforms hosting the content.134 Users in this constellation are consumers and citizens, but not cogovernors or coregulators. Nonetheless, the general idea of sharing responsibilities in order to achieve better results is also the driving idea behind this article.

Then again, the shared responsibility between state and non-state actors should not result in an additional threat to freedom of speech. Indeed, cooperation between states and social media platforms has been criticized in the past for increasing the risk of collateral censorship, especially when it operates within an opaque legal framework.135 To prevent any further development in that direction, governments should carefully monitor self-regulation and foster transparency. Furthermore, they should scrutinize whether the measures taken by private actors against hate speech or misinformation are in line with the goals that were set.

Proactively Avoiding More Governmental Intervention

All in all, the vision of private actors committing to democratic values and fundamental rights as much as possible is appealing, but are the responsibilities really shared between states and platforms? Facebook's example shows that the more its policies resemble, and seem inspired by, the principles and human rights frameworks of representative democracies, the more they will inspire users' confidence and appear as an act of goodwill.136 Making decisions appealable and, more importantly, introducing the concept of separation of powers within a private actor indicates a tendency toward state-like structures.137 Agreeing to be bound by, and to implement, the decisions reached by an independent institution, just as public authorities are bound by court decisions when both sides are part of the state, is not common among corporations. If this multiplies throughout the industry, at least among the large tech companies that provide intermediary services, it could influence how social media platforms integrate (or plan on integrating) procedural rules, for example, allowing users to file appeals and to request remedy mechanisms in their moderation policies.

Although such developments could have positive effects, one needs to bear in mind the inherent risk of whitewashing: when companies create an external body to whose decisions they subordinate themselves, that body can be misused as a lightning rod. The platforms' tendency toward policies that resemble administrative law is stronger than the reverse. The two use cases in Germany and the EU show that the State first and foremost delegates to private actors, even though self-regulation does not suffice in the context of content moderation.138 More specifically, instruments of soft law are not sufficient if the goal is to force the targeted actors to actually perform. Instead, unenforceable practical guidance leaves them free to pursue their own agenda, for better or worse. Only when regulation stipulates “sticks,” that is, financial disadvantages such as the high fines under NetzDG, will the provisions be implemented. This neither implies an obligation to adopt similar regulatory measures, nor is it necessarily the optimal way to go with regard to collateral censorship, but it demonstrates the weakness of soft law (i.e., self-regulation) and raises questions about the duty of legislators to act more firmly toward private actors when democratic principles are at stake. It remains to be seen whether the new self-imposed mechanisms are the best way forward for social media platforms and especially for their users.

Footnotes

I would like to thank Johannes M. Bauer for his valuable feedback on the first version of this article, as well as the participants of the JIP preconference workshop, “Taming and Nurturing the Wild Child: Government and Corporate Policies for Social Media,” at the International Communication Association's (ICA) 69th Annual Conference, “Communication Beyond Boundaries” (May 24–28, 2019, Washington, DC).

1.

Cox.

2.

Tucker et al.; Kaye, 7–8.

3.

Benkler, Faris, and Roberts, 362.

4.

Definition by Lexico, retrieved from https://www.lexico.com/en/definition/regulation, accessed September 10, 2019.

5.

The terms social media platforms and platforms will be used as synonyms hereinafter, defined as general-purpose platforms for social communication and information sharing (as in Helberger, Pierson, and Poell, 1); intermediaries will serve as the generic term for Internet access providers and hosts.

6.

Definition from Merriam Webster, retrieved from https://www.merriam-webster.com/dictionary/hate%20speech, accessed April 22, 2019.

7.

Waldron, 9–10.

8.

Ibid., 13.

9.

Official translation, retrieved from https://www.gesetze-im-internet.de/englisch_gg/englisch_gg.html#p0037, accessed April 22, 2019.

10.

It also needs to be constitutional in a more general sense, but these are the conditions specific to Art. 5 Basic Law.

11.

Official translation, retrieved from https://www.gesetze-im-internet.de/englisch_stgb/englisch_stgb.html#p1246, accessed April 22, 2019.

12.

Kinstler, “Can Germany Fix Facebook?”

13.

Klonick, 1621.

14.

Content moderation refers to the way social media platforms handle user-generated content.

15.

Roberts, 2; Gillespie, 8–9.

16.

Faus and Storks, 14.

17.

Which is also related to the liability exemption under the EU E-Commerce Directive: Kuczerawy, “Intermediary Liability & Freedom of Expression,” 48–49.

18.

Constine.

19.

Kinstler, “Can Germany Fix Facebook?”; Palfrey, 990.

20.

I use takedown as a generic term for the removal of content, whether it is deleted or blocked. In some contexts this distinction is relevant; for example, under the NetzDG, companies delete content that violates their community guidelines and geoblock content that contravenes German law. However, the distinction does not add to the argument of this article, which is why I will not elaborate further here.

21.

Speech delivered on June 30, 2017, retrieved from https://www.bmjv.de/SharedDocs/Reden/DE/2017/06302017_BT_NetzDG.html, accessed April 22, 2019.

22.

Heldt, “Reading between the Lines and the Numbers.”

23.

Kinstler, “Germany's Attempt to Fix Facebook.”

24.

Kaye, 7; Richter, “Das NetzDG—Wunderwaffe gegen „Hate Speech“ und „Fake News“.”

25.

Cf. Schulz.

26.

Guggenberger, 2582.

27.

Schmitz and Berndt, 7; Wischmeyer, 15–16; Buermeyer.

28.

Echikson and Knodt, 11.

29.

Heldt, “Reading between the Lines and the Numbers.”

30.

Wischmeyer, 20.

31.

Wimmers and Heymann, 100.

32.

Nunziato, 392; Belli, Francisco, and Zingales, 52; Citron, “What to Do About the Emerging Threat,” 3; Kaye, 10.

33.

Nolte, 556–58; Liesching, 27; Wischmeyer, 15–16.

34.

Citron, “What to Do About the Emerging Threat,” 5.

35.

Palfrey, 991–92.

36.

Kaye, 12.

37.

Waldman, 849.

38.

Bradshaw and Howard, 10–11.

39.

Definition from Merriam-Webster, retrieved from https://www.merriam-webster.com/dictionary/disinformation, accessed April 30, 2019.

40.

Bakir and McStay; Benkler, Faris, and Roberts; Lazer et al.; Timberg and Romm.

41.

Vogelstein and Thompson.

42.

Bradshaw and Howard, 16.

43.

Cadwalladr and Graham-Harrison; Vogelstein and Thompson.

44.

Cadwalladr and Graham-Harrison.

45.

Bakir and McStay, 165.

46.

Bradshaw and Howard, 11.

47.

Barel.

48.

Belli; Iglesias Keller.

49.

Benkler, Faris, and Roberts, 384.

50.

Bradshaw and Howard, 16–17; Helberger, Pierson, and Poell, 7.

51.

Waldman, 851.

52.

Howard et al., 39.

53.

Investigate Europe Team; Schmidt, Schumann, and Simantke.

54.

Hölig.

55.

LOI n° 2018-1202 du 22 décembre 2018 relative à la lutte contre la manipulation de l'information, du 23 décembre 2018, retrieved from https://www.legifrance.gouv.fr/affichTexte.do;jsessionid=97BB2F082E6CF33B3399966E8A6CE9BD.tplgfr31s_3?cidTexte=JORFTEXT000037847559&categorieLien=id, accessed April 24, 2019.

56.

EU Commission.

57.

Similar to the Honest Ads Act, a bill proposed to the U.S. Congress in 2017 and still in progress, retrieved from https://www.congress.gov/bill/115th-congress/senate-bill/1989, accessed May 1, 2019.

58.

See infra.

59.

Helberger, Pierson, and Poell, 8.

60.

Latzer, Just, and Saurwein, 376.

61.

Cini.

62.

Wikipedia, retrieved from https://en.wikipedia.org/wiki/Code_of_practice, accessed April 25, 2019.

63.

Bradshaw and Howard, 16.

64.

Andreas Voßkuhle and Anna-Bettina Kaiser, “Der Grundrechtseingriff,” Juristische Schulung (2009): 313; Julian Staben and Markus Oermann, “Mittelbare Grundrechtseingriffe durch Abschreckung?—Zur grundrechtlichen Bewertung polizeilicher „Online-Streifen“ und „Online-Ermittlungen“ in sozialen Netzwerken,” Der Staat (2013): 630, 637.

65.

Latzer, Just, and Saurwein, 375, about the potential disadvantages of self-regulation as compared to state regulation.

66.

Birnhack and Elkin-Koren, 14.

67.

See also Kuczerawy, “The Power of Positive Thinking,” 235.

68.

Definition from Merriam-Webster, retrieved from https://www.merriam-webster.com/dictionary/power, accessed April 26, 2019.

69.

Keller, 8.

70.

Retrieved from https://www.echr.coe.int/Documents/Convention_ENG.pdf, accessed April 27, 2019.

71.

Cornils, para. 52.

72.

Jackson, 136.

73.

Klonick, 28.

74.

Citron, “What to Do About the Emerging Threat,” 2–3; Langvardt, 1358–63.

75.

Constine.

76.

For example, Facebook's (non-)compliance with the NetzDG obligation to implement a user-friendly and accessible complaint procedure: Heldt, “Reading between the Lines and the Numbers,” 11. In July 2019, the German Federal Office of Justice fined Facebook for underreporting complaints that would fall within the NetzDG's scope of application; see, among many, Deutsche Welle, retrieved from https://www.dw.com/en/germany-fines-facebook-for-underreporting-hate-speech-complaints/a-49447820, accessed September 10, 2019.

77.

Klonick, 1649–50.

78.

Benkler, Faris, and Roberts, 22.

79.

For more on the platforms' private ordering, see Belli and Venturini, 4.

80.

Castells, 410–28; Balkin; Keller.

81.

Namely the “mittelbare Drittwirkung der Grundrechte” in Germany and the “théorie des obligations positives” in the ECtHR's jurisprudence.

82.

For example, section 230 Communications Decency Act in the United States and Art. 14 E-Commerce Directive in the European Union.

83.

DeNardis and Hackl, 766.

84.

Roberts, 2–3.

85.

Constine.

86.

Koebler and Cox.

87.

Ibid.; Heldt, “Reading between the Lines and the Numbers,” 8.

88.

Farkas and Neumayer.

89.

Timberg and Dwoskin.

90.

Facebook Newsroom, December 6, 2018, retrieved from https://newsroom.fb.com/news/2018/12/inside-feed-coordinated-inauthentic-behavior/, accessed April 29, 2019.

91.

“A test to limit forwarding that will apply to everyone using WhatsApp,” launched on July 19, 2018 and last updated on January 21, 2019, retrieved from https://blog.whatsapp.com/10000647/Weitere-%C3%84nderungen-an-der-Weiterleitungsfunktion?lang=en, accessed July 21, 2019.

92.

Facebook Newsroom, “Remove, Reduce, Inform: New Steps to Manage Problematic Content,” April 10, 2019, retrieved from https://newsroom.fb.com/news/2019/04/remove-reduce-inform-new-steps/, accessed July 2, 2019.

93.

Lai and Hui, 260; Smith.

94.

Facebook Newsroom, April 24, 2018, retrieved from https://newsroom.fb.com/news/2018/04/comprehensive-community-standards/, accessed April 29, 2019.

95.

Latzer, Just, and Saurwein, 375.

96.

Malhotra, Monin, and Tomz, 19–20.

97.

“Why big tech should fear Europe,” The Economist, March 23, 2019, retrieved from https://www.economist.com/leaders/2019/03/23/why-big-tech-should-fear-europe, accessed July 2, 2019.

98.

Douek.

99.

Crawford and Gillespie.

100.

Kessler.

101.

Facebook's Community Standards, retrieved from https://www.facebook.com/communitystandards/, accessed April 30, 2019.

102.

Facebook's Policy Rationale for hate speech, retrieved from https://www.facebook.com/communitystandards/hate_speech, and Facebook's Hard Questions on hate speech, retrieved from https://newsroom.fb.com/news/2017/06/hard-questions-hate-speech/, accessed April 29, 2019.

103.

Facebook's Policy for False Information, retrieved from https://www.facebook.com/communitystandards/false_news, accessed April 30, 2019.

104.

Zuckerberg.

105.

Zuckerberg referred to the oversight board as a “Supreme Court of Facebook.”

106.

Facebook.

107.

Harris.

108.

“Global Feedback & Input on the Facebook Oversight Board,” 24–25.

109.

So far, Zuckerberg has announced: “We're also working to provide more transparency into how policies were either violated or not.” See Zuckerberg.

110.

FB's Oversight Board charter June 27, 2019?

111.

“Global Feedback & Input on the Facebook Oversight Board,” 17, 21.

112.

Ibid., 22–23.

113.

Facebook, 3; Ibid., 27.

114.

See, for example, sec. 30 German Stock Corporation Act and Art. 51 Company Law of the People's Republic of China.

115.

Shapiro, 1–64; Grossman.

116.

Pasquale, 512–13.

117.

Klonick, 1664.

118.

Helberger, Pierson, and Poell, 8; Burt.

119.

Klonick and Kadri; Douek.

120.

Latzer, Just, and Saurwein, 377.

121.

Gorwa, 864.

122.

Grajzl and Murrell, 523; Léonard et al., 174.

123.

Edelman, Uggen, and Erlanger.

124.

Gilad, 136.

125.

Jenkins, 8.

126.

Ogus, 643.

127.

Ibid.

128.

Haufler, 85; Walker, 150.

129.

Gill, Redeker, and Gasser; Padovani and Santaniello; Suzor.

130.

Celeste, 3–4.

131.

Suzor, 4; Kaye, 14–15.

132.

Helberger, Pierson, and Poell, 2.

133.

Klonick, 1603.

134.

Zuboff, 199–200; Burt.

135.

Citron, “Extremist Speech”; Birnhack and Elkin-Koren; Elkin-Koren and Haber; Heldt, “Upload-Filters: Bypassing.”

136.

Haufler, 85.

137.

Teubner, 20.

138.

Nurik, 2893; Lievens, 87.

Bibliography

Bakir, Vian, and Andrew McStay. “Fake News and the Economy of Emotions: Problems, Causes, Solutions.” Digital Journalism 6, no. 2 (February 7, 2018): 154–75. doi:10.1080/21670811.2017.1345645.
Balkin, Jack M. “Free Speech Is a Triangle.” Columbia Law Review 118, no. 7 (May 28, 2018): 2011–56.
Barel, Ofir. “Why Are Israeli Elections Extremely Sensitive to Fake News?” Council on Foreign Relations (blog), April 9, 2019. Accessed October 31, 2019. https://www.cfr.org/blog/why-are-israeli-elections-extremely-sensitive-fake-news.
Belli, Luca. “WhatsApp Skewed Brazilian Election, Proving Social Media's Danger to Democracy by Luca Belli.” Utica College Center of Public Affairs and Election Research (blog), December 9, 2018. Accessed October 31, 2019. https://www.ucpublicaffairs.com/home/2018/12/9/gtricezvp8cmlm7ppdqm7amga1qjoo.
Belli, Luca, Pedro Augusto P. Francisco, and Nicolo Zingales. “Law of the Land or Law of the Platform? Beware of the Privatisation of Regulation and Police.” October 31, 2017. Accessed October 31, 2019. http://bibliotecadigital.fgv.br/dspace/handle/10438/19922.
Belli, Luca, and Jamila Venturini. “Private Ordering and the Rise of Terms of Service as Cyber-Regulation.” Internet Policy Review 5, no. 4 (December 29, 2016). doi:10.14763/2016.4.441.
Benkler, Yochai, Rob Faris, and Hal Roberts. Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics. New York: Oxford University Press, 2018.
Birnhack, Michael D., and Niva Elkin-Koren. “The Invisible Handshake: The Reemergence of the State in the Digital Environment.” Virginia Journal of Law and Technology 8, no. 6 (2003): 1–56.
Bradshaw, Samantha, and Philip N. Howard. “Why Does Junk News Spread So Quickly Across Social Media? Algorithms, Advertising and Exposure in Public Life.” Working paper. Knight Foundation Working Paper. Oxford, UK: Oxford Internet Institute, January 2018.
Buermeyer, Ulf. “NetzDG: Facebook-Justiz statt wirksamer Strafverfolgung?” Legal Tribune Online, March 24, 2017. Accessed October 31, 2019. https://www.lto.de/recht/hintergruende/h/netzwerkdurchsetzungsgesetz-netzdg-facebook-strafverfolgung-hate-speech-fake-news/3/.
Burt, Andrew. “Can Facebook Ever Be Fixed?” Harvard Business Review, April 8, 2019. Accessed October 31, 2019. https://hbr.org/2019/04/can-facebook-ever-be-fixed.
Cadwalladr, Carole, and Emma Graham-Harrison. “Revealed: 50 Million Facebook Profiles Harvested for Cambridge Analytica in Major Data Breach.” The Guardian, March 17, 2018, sec. News. Accessed October 31, 2019. https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election.
Castells, Manuel. “The New Public Sphere: Global Civil Society, Communication Networks, and Global Governance.” The Annals of the American Academy of Political and Social Science 616, no. 1 (March 2008): 78–93. doi:10.1177/0002716207311877.
Celeste, Edoardo. “Terms of Service and Bills of Rights: New Mechanisms of Constitutionalisation in the Social Media Environment?” International Review of Law, Computers & Technology, May 21, 2018. Accessed October 31, 2019. https://www.tandfonline.com/doi/abs/10.1080/13600869.2018.1475898.
Cini, Michelle. “The Soft Law Approach: Commission Rule-Making in the EU's State Aid Regime.” Journal of European Public Policy 8, no. 2 (2001): 192–207.
Citron, Danielle Keats. “Extremist Speech, Compelled Conformity, and Censorship Creep.” Notre Dame Law Review 93, no. 3 (2017a): 1035.
Citron, Danielle Keats. “What to Do About the Emerging Threat of Censorship Creep on the Internet.” Policy Analysis. CATO Institute, November 28, 2017b.
Constine, Josh. “Facebook Reveals 25 Pages of Takedown Rules for Hate Speech and More.” TechCrunch (blog), April 24, 2018. Accessed October 31, 2019. http://social.techcrunch.com/2018/04/24/facebook-content-rules/.
Cornils, Matthias. “EMRK Art. 10 Freiheit Der Meinungsäußerung.” In BeckOK Informations-Und Medienrecht, edited by Hubertus Gersdorf and Boris P. Paal. München: C.H. Beck, January 5, 2016.
Cox, Joseph. “Documents Show How Facebook Moderates Terrorism on Livestreams.” Vice (blog), March 15, 2019. Accessed October 31, 2019. https://www.vice.com/en_us/article/eve7w7/documents-show-how-facebook-moderates-terrorism-on-livestreams.
Crawford, Kate, and Tarleton Gillespie. “What Is a Flag for? Social Media Reporting Tools and the Vocabulary of Complaint.” New Media & Society 18, no. 3 (March 1, 2016): 410–28. doi:10.1177/1461444814543163.
DeNardis, L., and A. M. Hackl. “Internet Governance by Social Media Platforms.” Telecommunications Policy (Special Issue on the Governance of Social Media) 39, no. 9 (October 1, 2015): 761–70. doi:10.1016/j.telpol.2015.04.003.
Douek, Evelyn. “Facebook's New ‘Supreme Court’ Could Revolutionize Online Speech.” Lawfare (blog), November 19, 2018. Accessed October 31, 2019. https://www.lawfareblog.com/facebooks-new-supreme-court-could-revolutionize-online-speech.
Echikson, William, and Olivia Knodt. “Germany's NetzDG: A Key Test for Combatting Online Hate.” Research report. Thinking Ahead of Europe. Brussels: CEPS, November 2018. Accessed October 31, 2019. https://www.ceps.eu/ceps-publications/germanys-netzdg-key-test-combatting-online-hate.
Edelman, Lauren B., Christopher Uggen, and Howard S. Erlanger. “The Endogeneity of Legal Regulation: Grievance Procedures as Rational Myth.” American Journal of Sociology 105, no. 2 (1999): 406–54.
Elkin-Koren, Niva, and Eldar Haber. “Governance by Proxy: Cyber Challenges to Civil Liberties.” Brooklyn Law Review 82 (2016–2017): 105–62.
EU Commission. “Code of Practice on Disinformation,” September 26, 2018. Accessed October 31, 2019. https://ec.europa.eu/digital-single-market/en/news/code-practice-disinformation.
Facebook. “Draft Charter: An Oversight Board for Content Decisions.” Facebook, January 28, 2019. Accessed October 31, 2019. https://fbnewsroomus.files.wordpress.com/2019/01/draft-charter-oversight-board-for-content-decisions-2.pdf.
Farkas, Johan, and Christina Neumayer. “‘Stop Fake Hate Profiles on Facebook’: Challenges for Crowdsourced Activism on Social Media.” First Monday 22, no. 9 (September 1, 2017). doi:10.5210/fm.v22i9.8042.
Faus, Rainer, and Simon Storks. “Das pragmatische Einwanderungsland: Was die Deutschen über Migration denken.” Bonn, Germany: Friedrich-Ebert-Stiftung, April 2019. Accessed October 31, 2019. http://library.fes.de/pdf-files/fes/15213-20190402.pdf.
Gilad, Sharon. “Beyond Endogeneity: How Firms and Regulators Co-Construct the Meaning of Regulation: Beyond Endogeneity.” Law & Policy 36, no. 2 (April 2014): 134–64. doi:10.1111/lapo.12017.
Gill, Lex, Dennis Redeker, and Urs Gasser. “Towards Digital Constitutionalism? Mapping Attempts to Craft an Internet Bill of Rights.” Berkmann Center Research Publication, November 9, 2015. Accessed October 31, 2019. https://cyber.harvard.edu/node/99209.
Gillespie, Tarleton. “Governance of and by Platforms.” In Sage Handbook of Social Media, edited by Jean Burgess, Thomas Poell, and Alice Marwick, 254–78. London: Sage, 2017.
Gorwa, Robert. “What Is Platform Governance?” Information, Communication & Society 22, no. 6 (May 12, 2019): 854–71. doi:10.1080/1369118X.2019.1573914.
Grajzl, Peter, and Peter Murrell. “Allocating Lawmaking Powers: Self-Regulation vs Government Regulation.” Journal of Comparative Economics 35, no. 3 (September 2007): 520–45. doi:10.1016/j.jce.2007.01.001.
Grossman, Joel B. “Judicial Legitimacy and the Role of Courts: Shapiro's Courts Review Essay.” American Bar Foundation Research Journal 1984 (1984): 214–22.
Guggenberger, Nikolas. “Das Netzwerkdurchsetzungsgesetz in der Anwendung.” Neue Juristische Wochenschrift, no. 36 (2017): 2577–82.
Harris, Brent. “Getting Input on an Oversight Board.” Facebook Newsroom (blog). April 1, 2019. Accessed April 30, 2019. https://newsroom.fb.com/news/2019/04/input-on-an-oversight-board/.
Haufler, Virginia. A Public Role for the Private Sector: Industry Self-Regulation in a Global Economy. Washington, DC: Carnegie Endowment, 2013.
Helberger, Natali, Jo Pierson, and Thomas Poell. “Governing Online Platforms: From Contested to Cooperative Responsibility.” The Information Society 34, no. 1 (January 2018): 1–14. doi:10.1080/01972243.2017.1391913.
Heldt, Amélie Pia. “Reading between the Lines and the Numbers: An Analysis of the First NetzDG Reports.” Internet Policy Review 8, no. 2 (June 12, 2019a). Accessed October 31, 2019. https://policyreview.info/articles/analysis/reading-between-lines-and-numbers-analysis-first-netzdg-reports.
Heldt, Amélie Pia. “Upload-Filters: Bypassing Classical Concepts of Censorship?” JIPITEC 10, no. 1 (May 5, 2019b). https://www.jipitec.eu/issues/jipitec-10-1-2019/4877.
Hölig, Sascha. “Meinung: Fake News werden die Wahlen in Deutschland nicht entscheiden.” Bundeszentrale für politische Bildung (blog), April 25, 2017. Accessed October 31, 2019. https://www.bpb.de/dialog/netzdebatte/247147/meinung-fake-news-werden-die-wahlen-in-deutschland-nicht-entscheiden.
Howard, Philip N., Bharath Ganesh, Dimitra Liotsiou, John Kelly, and Camille François. “The IRA, Social Media and Political Polarization in the United States, 2012-2018.” Computational Propaganda Research Project. Oxford, UK: University of Oxford, n.d. Accessed October 31, 2019. https://comprop.oii.ox.ac.uk/wp-content/uploads/sites/93/2018/12/The-IRA-Social-Media-and-Political-Polarization.pdf.
Iglesias Keller, Clara. “Could Fake News Annul the Brazilian Elections?” HIIG (blog), October 4, 2018. Accessed October 31, 2019. https://www.hiig.de/en/could-fake-news-annul-the-brazilian-elections/.
Investigate Europe Team. “The Disinformation Machine.” Investigate Europe (blog), April 14, 2019. Accessed October 31, 2019. https://www.investigate-europe.eu/publications/disinformation-machine/.
Jackson, Benjamin. “Censorship and Freedom of Expression in the Age of Facebook.” New Mexico Law Review 44, no. 1 (July 1, 2014): 121.
Jenkins, Rhys. “Corporate Codes of Conduct: Self-Regulation in a Global Economy.” United Nations Research Institute for Social Development, Technology, Business and Society, 2001, 48.
Kaye, David. “Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression.” Human Rights Council. United Nations General Assembly, June 4, 2018. Accessed October 31, 2019. https://www.ohchr.org/EN/Issues/FreedomOpinion/Pages/OpinionIndex.aspx.
Keller, Daphne. “Who Do You Sue? State and Platform Hybrid Power Over Online Speech.” Hoover Institution, Aegis Series Paper no. 1902 (January 30, 2019): 1–40.
Kessler, Sarah. “Donald Trump Can Post Hate Speech to Facebook, But You Can't.” Fast Company (blog), December 11, 2015. Accessed October 31, 2019. https://www.fastcompany.com/3054592/donald-trump-can-post-hate-speech-to-facebook-but-you-cant.
Kinstler, Linda. “Can Germany Fix Facebook?” The Atlantic, November 2, 2017. Accessed October 31, 2019. https://www.theatlantic.com/international/archive/2017/11/germany-facebook/543258/.
Kinstler, Linda. “Germany's Attempt to Fix Facebook Is Backfiring.” The Atlantic, May 18, 2018. https://www.theatlantic.com/international/archive/2018/05/germany-facebook-afd/560435/.
Klonick, Kate. “The New Governors: The People, Rules, and Processes Governing Online Speech.” Harvard Law Review 131, no. 6 (2018): 1598–1669.
Klonick, Kate, and Thomas Kadri. “How to Make Facebook's ‘Supreme Court’ Work.” The New York Times, November 18, 2018, sec. Opinion. Accessed October 31, 2019. https://www.nytimes.com/2018/11/17/opinion/facebook-supreme-court-speech.html.
Koebler, Jason, and Joseph Cox. “Here's How Facebook Is Trying to Moderate Its Two Billion Users.” Motherboard (blog), August 23, 2018. Accessed October 31, 2019. https://motherboard.vice.com/en_us/article/xwk9zd/how-facebook-content-moderation-works.
Kuczerawy, Aleksandra. “Intermediary Liability & Freedom of Expression: Recent Developments in the EU Notice & Action Initiative.” Computer Law & Security Review 31, no. 1 (February 1, 2015): 46–56. doi:10.1016/j.clsr.2014.11.004.
Kuczerawy, Aleksandra. “The Power of Positive Thinking: Intermediary Liability and the Effective Enjoyment of the Right to Freedom of Expression.” JIPITEC 8, no. 3 (2017): 226–38.
Lai, Yee-Lin, and Kai-Lung Hui. “Internet Opt-in and Opt-out: Investigating the Roles of Frames, Defaults and Privacy Concerns.” In Proceedings of the 2006 ACM SIGMIS CPR Conference on Computer Personnel Research Forty Four Years of Computer Personnel Research: Achievements, Challenges & the Future—SIGMIS CPR '06, 253. Claremont, California: ACM Press, 2006. doi:10.1145/1125170.1125230.
Langvardt, Kyle. “Regulating Online Content Moderation.” Georgetown Law Journal 106, no. 5 (2018): 1353–88. doi:10.2139/ssrn.3024739.
Latzer, Michael, Natascha Just, and Florian Saurwein. “Self- and Co-Regulation: Evidence, Legitimacy and Governance Choice.” In Routledge Handbook of Media Law, edited by Monroe Edwin Price, Stefaan Verhulst, and Libby Morgan, 373–97. Routledge Handbooks. London: Routledge, 2013.
Lazer, David M. J., Matthew A. Baum, Yochai Benkler, Adam J. Berinsky, Kelly M. Greenhill, Filippo Menczer, Miriam J. Metzger, Brendan Nyhan, Gordon Pennycook, David Rothschild, Michael Schudson, Steven A. Sloman, Cass R. Sunstein, Emily A. Thorson, Duncan J. Watts, and Jonathan L. Zittrain. “The Science of Fake News.” Science 359, no. 6380 (March 9, 2018): 1094–96. doi:10.1126/science.aao2998.
Léonard, Evelyne, Valeria Pulignano, Ryan Lamare, and Tony Edwards. “Multinational Corporations as Political Players.” Transfer: European Review of Labour and Research 20, no. 2 (May 1, 2014): 171–82. doi:10.1177/1024258914525559.
Liesching, Marc. “Die Durchsetzung von Verfassungs-Und Europarecht Gegen Das NetzDG.” Multimedia Und Recht, no. 01 (2018): 26–30.
Lievens, Eva. “Is Self-Regulation Failing Children and Young People? Assessing the Use of Alternative Regulatory Instruments in the Area of Social Networks.” In European Media Policy for the Twenty-First Century: Assessing the Past, Setting Agendas for the Future, edited by Seamus Simpson, Manuel Puppis, and Hilde van den Bulck, 77–94. Routledge Advances in Internationalizing Media Studies 17. New York, London: Routledge, 2016.
Malhotra, Neil, Benoît Monin, and Michael Tomz. “Does Private Regulation Preempt Public Regulation?” American Political Science Review 113, no. 1 (February 2019): 19–37. doi:10.1017/S0003055418000679.
Mentel Darmé, Zoe, Matt Miller, and Kevin Steeves. “Global Feedback & Input on the Facebook Oversight Board for Content Decisions.” Oversight Board Consultation Report. Facebook, June 27, 2019. Accessed October 31, 2019. https://fbnewsroomus.files.wordpress.com/2019/06/oversight-board-consultation-report-1.pdf.
Nolte, Georg. “Hate-Speech, Fake-News, Das ‘Netzwerkdurchsetzungsgesetz’ und Vielfaltsicherung Durch Suchmaschinen.” Zeitschrift Für Urheber-Und Medienrecht, no. 7 (2017): 552–65.
Nunziato, Dawn C. “The Beginning of the End of Internet Freedom.” Georgetown Journal of International Law 45 (2014): 383–410.
Nurik, Chloe. “‘Men Are Scum’: Self-Regulation, Hate Speech, and Gender-Based Censorship on Facebook.” International Journal of Communication 13 (June 30, 2019): 21.
Ogus, A. “Regulatory Institutions and Structures.” Annals of Public and Cooperative Economics 73, no. 4 (2002): 627–48. doi:10.1111/1467-8292.00208.
Padovani, Claudia, and Mauro Santaniello. “Digital Constitutionalism: Fundamental Rights and Power Limitation in the Internet Eco-System.” International Communication Gazette 80, no. 4 (June 2018): 295–301. doi:10.1177/1748048518757114.
Palfrey, John. “Four Phases of Internet Regulation.” Social Research: An International Quarterly 77, no. 3 (2010): 981–96.
Pasquale, Frank. “Platform Neutrality: Enhancing Freedom of Expression in Spheres of Private Power.” Theoretical Inquiries in Law 17, no. 2 (January 1, 2016): 487–513. doi:10.1515/til-2016-0018.
Richter, Philipp. “Das NetzDG—Wunderwaffe Gegen ‘Hate Speech’ Und ‘Fake News’ Oder Ein Neues Zensurmittel?” ZD-Aktuell, no. 9 (2017): 05623.
Roberts, Sarah T. “Commercial Content Moderation: Digital Laborers' Dirty Work.” FIMS Western University Media Studies Publications, 2016, 1–12.
Schmidt, Nico, Harald Schumann, and Elisa Simantke. “Wie gefährlich ist rechte Desinformation im Netz?” Tagesspiegel, April 14, 2019.
Schmitz, Sandra, and Christian M. Berndt. “The German Act on Improving Law Enforcement on Social Networks (NetzDG): A Blunt Sword?” December 14, 2018. Accessed October 31, 2019. https://papers.ssrn.com/abstract=3306964.
Schulz, Wolfgang. “Regulating Intermediaries to Protect Privacy Online—the Case of the German NetzDG.” In HIIG Discussion Paper Series, 15, n.d. https://www.hiig.de/wp-content/uploads/2018/07/SSRN-id3216572.pdf.
Shapiro, Martin M. Courts, a Comparative and Political Analysis. Chicago: University of Chicago Press, 1981.
Smith, Aaron. “Half of Online Americans Don't Know What a Privacy Policy Is.” Pew Research Center (blog), December 4, 2014. https://www.pewresearch.org/fact-tank/2014/12/04/half-of-americans-dont-know-what-a-privacy-policy-is/.
Suzor, Nicolas. “Digital Constitutionalism: Using the Rule of Law to Evaluate the Legitimacy of Governance by Platforms.” Social Media + Society 4, no. 3 (July 2018): 205630511878781. doi:10.1177/2056305118787812.
Teubner, Gunther. “Societal Constitutionalism: Alternatives to State-Centred Constitutional Theory?” In Transnational Governance and Constitutionalism, edited by Christian Joerges, Inger-Johanne Sand, and Gunther Teubner, 3–28. International Studies in the Theory of Private Law. Oxford: Hart, 2004.
Timberg, Craig, and Elizabeth Dwoskin. “Twitter Is Sweeping Out Fake Accounts Like Never Before, Putting User Growth at Risk.” Washington Post, June 7, 2018. Accessed October 31, 2019. https://www.washingtonpost.com/technology/2018/07/06/twitter-is-sweeping-out-fake-accounts-like-never-before-putting-user-growth-risk/.
Timberg, Craig, and Tony Romm. “New Report on Russian Disinformation, Prepared for the Senate, Shows the Operation's Scale and Sweep.” The Washington Post, December 17, 2018. Accessed October 31, 2019. https://mdn.ssrc.org/2018/12/17/new-report-on-russian-disinformation-prepared-for-the-senate-shows-the-operations-scale-and-sweep-the-washington-post/.
Tucker, Joshua A., Yannis Theocharis, Margaret E. Roberts, and Pablo Barberá. “From Liberation to Turmoil: Social Media and Democracy.” Journal of Democracy 28, no. 4 (October 7, 2017): 46–59. doi:10.1353/jod.2017.0064.
Vogelstein, Fred, and Nicholas Thompson. “15 Months of Fresh Hell Inside Facebook.” Wired, April 16, 2019. Accessed October 31, 2019. https://www.wired.com/story/facebook-mark-zuckerberg-15-months-of-fresh-hell/.
Waldman, Ari Ezra. “The Marketplace of Fake News.” University of Pennsylvania Journal of Constitutional Law 20 (2017): 845–70.
Waldron, Jeremy. The Harm in Hate Speech. Cambridge, MA: Harvard University Press, 2012.
Walker, Kristen L. “Surrendering Information through the Looking Glass: Transparency, Trust, and Protection.” Journal of Public Policy & Marketing 35, no. 1 (April 2016): 144–58. doi:10.1509/jppm.15.020.
Wimmers, Jörg, and Britta Heymann. “Zum Referentenentwurf Eines Netzwerkdurchsetzungsgesetzes (NetzDG)—Eine Kritische Stellungnahme.” AfP - Zeitschrift Für Medien-Und Kommunikationsrecht 48, no. 2 (January 1, 2017): 93–102. doi:10.9785/afp-2017-0202.
Wischmeyer, Thomas. “‘What Is Illegal Offline Is Also Illegal Online’—The German Network Enforcement Act 2017.” In Fundamental Rights Protection Online: The Future Regulation of Intermediaries, edited by Bilyana Petkova and Tuomas Ojanen. Cheltenham: Edward Elgar, 2019.
Zuboff, Shoshana. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. 1st ed. New York: PublicAffairs, 2019.
Zuckerberg, Mark. “A Blueprint for Content Governance and Enforcement.” Facebook (blog), November 15, 2018. Accessed October 31, 2019. https://www.facebook.com/notes/mark-zuckerberg/a-blueprint-for-content-governance-and-enforcement/10156443129621634/.
This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.