Abstract
Which legal instrument can effectively address current challenges in social media governance, and how do companies take on their share of responsibility, shifting away from opaque enforcement of terms of service and increasingly copying governmental structures? In a first step, this article describes and analyzes the way that states address hate speech and misinformation in their respective regulatory projects. In a second step, it examines how social media platforms sanction unwanted content and integrate (or plan on integrating) procedural rules such as appeal and due process principles in their moderation policies. Large social media platforms tend to adopt new structures that resemble administrative law—an uncommon development for non-state actors.
“Toxic Twitter,” “Junk News,” “Zero Tolerance,” “Biggest problem for democracy,” “Fresh Hell”—the headlines on the big social media platforms and the way they handle (or fail to handle) hate speech and misinformation have been negative for a considerable amount of time. This negativity has only increased since their possible influence on democratic elections was unveiled (e.g., by the Cambridge Analytica scandal) and more disturbing content made its way into users' newsfeeds, such as the live streaming of the Christchurch shooting in New Zealand.1 After a period during which states tried to push tech companies to find remedies while essentially both sides remained rather passive, things have changed over the past two years. Through different types of regulation on the one side and more private ordering on the other, all actors have noticeably become more active. Essentially, state and non-state actors are more eager to enforce countermeasures than before. In this article, I will show that, although a higher number of removals is not desirable per se, the synergies that result from the policies outlined below are beneficial to the normative goals pursued.
States address different concerns regarding online communication through different types of regulation, which are generally subject to criticism due to their possible effects on freedom of expression and information.2 In his work about the contentious role of governments, Benkler described the German regulation as follows: “The most aggressive effort in a liberal democracy to respond to disinformation and hate speech on social media by regulating social media platforms is the German NetzDG law that became effective on January 1, 2018.”3 But is the regulation of social media platforms really “aggressive,” more “aggressive” than private ordering, and does it have negative effects on democracy? In general, regulation is defined as a “rule or directive made and maintained by an authority,”4 hence subject to constitutional proviso and bound to the principles of necessity and proportionality. If an individual freedom enshrined in the constitution is formulated in an affirmative manner (e.g., “Every person shall have the right …”), the legislator might be obliged to adopt laws protecting it—as opposed to a prohibitive formulation (e.g., “shall make no law”) that protects each individual from legislative activities in that matter but not from legislative passivity. In general, regulatory interventions in speech matters are viewed critically in the United States (which has led to a broad comparative scholarship on the US and German approaches to hate speech). In its first section, the article does not aim at comparing laws in a formal way, but rather at examining two distinct regulatory approaches as attempts to tackle current issues of the digital public sphere, namely the phenomena of hate speech and of misinformation. The goal here is to examine not so much the laws in substance as the mechanisms of the regulatory instruments adopted in recent years, namely the German Network Enforcement Act (hereinafter NetzDG) and the EU Code of Practice on Disinformation.
Hate speech and misinformation are often named together when discussing the challenges of online communication although they are two very distinct issues, inter alia in the form of harm they cause and the subjects of protection they target. They might, however, have in common that they generate user engagement and therefore play a similarly ambiguous role in the attention economy. In a second step, this article provides an overview of the measures endorsed by social media platforms5 and shows that they are increasingly similar to the instruments used by the state and to principles enshrined in administrative law. These developments seem to me to be moving toward each other. While the state is potentially delegating power to platforms, the platforms sound less and less like private companies and more like administrative authorities. Additionally, the reasons for the increasing similarities are twofold: states have realized they need to take action given how urgent the problems are, and companies feel the imminent threat of regulation, which makes them more inclined to align their policies with lawmakers' wishes.
The Regulatory Approach to Current Challenges
Hate Speech and Germany
The Problem: Hate Speech
Hate speech is generally defined as “speech expressing hatred of a particular group of people.”6 In countries where laws prohibit hate speech, different definitions can be found in their respective penal codes. A broader definition of hate speech in the legal context is “words which are deliberately abusive and/or insulting and/or threatening and/or demeaning directed at members of vulnerable minorities, calculated to stir up hatred against them.”7 Forbidding this type of speech is not a German peculiarity due to its history, but rather common in many western democracies where restricting fundamental rights in a proportionate manner is legitimate.8 In Germany, freedom of speech is formulated in an affirmative way (“Every person shall have the right freely to express and disseminate his opinions in speech, writing and pictures (…).”9) in Art. 5 (1) Basic Law. By constitutional proviso, this fundamental right may be restricted (“These rights shall find their limits in the provisions of general laws, in provisions for the protection of young persons, and in the right to personal honour.”), provided the restricting law does not infringe the principles of proportionality (“Verhältnismäßigkeit”) and of interdependence (“Wechselwirkung”) with regard to speech-targeting purposes.10 However, the law does not need to be speech-targeting to be subject to the scrutiny of Art. 5 (2) Basic Law.
In the case of hate speech, there is no general provision but several articles in the German criminal code (“Strafgesetzbuch,” hereinafter StGB), the most prominent in the public debate being the criminal liability for denying the Holocaust under sec. 130 StGB. The latter forbids all incitement to hatred in a “manner capable of disturbing the public peace.”11 Beyond this provision, due to Germany's unforgivable history and legacy, other provisions restrict speech to protect human dignity, which is enshrined in Art. 1 Basic Law and cannot be restricted. At the same time, the scope of protection of freedom of expression is very broad and Art. 5 (1) 2 Basic Law strictly prohibits censorship by the State. Hence, speech-restricting laws do not result in less freedom but rather reflect a different understanding of how hate speech harms public discourse than, for example, the one underlying the First Amendment's free speech clause.
This constitutional foundation is essential to understanding why large social media platforms were blamed by significant parts of the German public for allowing hate speech.12 Instead of prioritizing individual liberty within the marketplace of ideas, the German model sets limits when human dignity or other fundamental rights are at risk. Since the biggest social media platforms are deeply rooted in the US system, they—in contrast—tend to have a different understanding of free speech.13 The latter might have a spillover effect on other platforms that are perhaps not based in the United States but follow the lead from Silicon Valley. Either way, the concept of free speech according to the First Amendment and the platforms' absolute freedom to govern their users' speech relating thereto are predominant in the area of content moderation.14 Adding to this, tackling hate speech on a global scale, across different jurisdictions and with the help of reviewers who are not always familiar with the national context, is an enormous challenge.15 The rise of hate speech on social media platforms also affected Germany, with a (perceived) peak of hateful content in late 2015 when many refugees from Syria arrived in Europe and the German chancellor decided to grant them asylum. Merkel's decision marks the beginning of a significant increase in online hate speech (mainly attributed to racists and members of far-right parties) and of the fear thereof.16 The issue of hate speech online became increasingly pressing because the platforms did not remove content fast enough or sometimes only after being publicly exposed.17 It became more evident by that time that the way in which the platforms moderated user-generated content was very opaque18 and that it was difficult to assess whether they were doing everything they could to remove illegal (by German standards) content. Germany and other EU Member States first hoped to get the intermediaries to solve the problem of hate speech themselves, but these attempts to govern the platforms via self-regulation failed.19 In a nutshell, it became increasingly difficult for the German government to justify that the biggest social media platforms would allow “Nazi content” or not take it down quickly,20 and that the government had no leverage.
The Reaction: The NetzDG
In his speech to the German Parliament in June 2017, the former Minister of Justice, Heiko Maas, explained that, given the increase of hate speech online by 300 percent between 2015 and 2017, and given the unwillingness of the big social media platforms to remove this type of content, the government had to regulate.21 He then presented the first draft of the Network Enforcement Act (hereinafter NetzDG)—a law that primarily obliges social media platforms to ensure that “manifestly unlawful” content is removed within 24 hours. After a few amendments, the law was passed in late 2017 and came into force on January 1, 2018. Its goal is to curb the spread of illegal content on social media by forcing the platforms to act upon illegal content according to national regulation and not solely according to their own community standards.
To comply with the NetzDG, all platforms that count more than two million users in Germany need to implement a user-friendly complaint procedure and to remove “manifestly unlawful” content within 24 hours, or within seven days for less clear cases. Unlawful content is content that violates existing provisions on libel, defamation, incitement to hatred, and so on. No new law was written to fight hate speech online; in fact, the NetzDG “only” lists the provisions under which hate speech is forbidden and obliges the platforms to enforce them. The rationale of the NetzDG is to make sure that illegal content will not stay online longer than necessary and cause harm. Through this regulatory intervention, the German government hoped to tame the phenomenon of verbal coarsening online, that is, hate speech and its presumed negative effects on the public discourse. The first transparency reports published by the companies affected by the NetzDG show that hate speech is still the main reason for complaints and accordingly for takedowns. The reports also show that the vast majority of complaints were taken care of within 24 hours and only (relatively) few cases were delegated to external institutions. The reports also give a lot of insight into the weak points of the NetzDG,22 but in general, the coming into force of the NetzDG provoked faster reactions to unwanted content (for better or worse23). This, in turn, raises questions as to the reasonable justification of takedown decisions and possible overblocking effects due to the high fines (up to 50 million euros) for noncompliance with the NetzDG's obligations.24
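The removal deadlines at the core of the NetzDG can be pictured as a simple triage rule. The following minimal sketch in Python is offered only as an illustration of that mechanism; the class and field names are assumptions made for the example and do not describe any platform's actual compliance system.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum
from typing import Optional


class Assessment(Enum):
    MANIFESTLY_UNLAWFUL = "manifestly unlawful"  # 24-hour removal window
    UNLAWFUL = "unlawful"                        # up to seven days, e.g., when context is needed
    LAWFUL = "lawful"                            # no NetzDG obligation to remove


@dataclass
class Complaint:
    content_id: str
    received_at: datetime
    assessment: Assessment  # result of the platform's legal review (hypothetical field)


def removal_deadline(complaint: Complaint) -> Optional[datetime]:
    """Return the latest point in time by which the platform must act,
    following the two NetzDG deadlines described in the text."""
    if complaint.assessment is Assessment.MANIFESTLY_UNLAWFUL:
        return complaint.received_at + timedelta(hours=24)
    if complaint.assessment is Assessment.UNLAWFUL:
        return complaint.received_at + timedelta(days=7)
    return None  # lawful content stays up (it may still violate community standards)
```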
Delegating Power to Platforms
The NetzDG has been under attack from the very beginning, for multiple reasons, some already mentioned earlier.25 One main concern is the “privatization” of the judiciary as a side effect of the platforms complying with the obligation to remove unlawful content within 24 hours.26 The obligation to remove unlawful content is in itself not problematic, but who gets to decide if user-generated content is “manifestly” unlawful? By delegating this task to social media platforms, the State has factually given the responsibility to decide upon the lawfulness of content to the reviewers in charge of content moderation. While the regulatory intent was to make the platforms take more responsibility for problematic and illegal content, it resulted in a significant increase of power on the platforms' side. Given the complexity of the task (applying national criminal law and interpreting vague legal terms), many have criticized its delegation to the private companies running the social media platforms.27
Whether this was really causal for the takedown decisions in the period following the coming into force of the NetzDG, and whether it led to over-removal of content and, consequently, to chilling effects on users, remains unclear.28 The German lawmaker included the obligation to publish biannual transparency reports for all platforms affected by the NetzDG. However, because the platforms implemented the NetzDG in different ways, the reports do not all provide reliable numbers29 and moreover suggest the opposite of over-removal.30 The question raised by the NetzDG is nevertheless more fundamental: may a state delegate this type of decision to the private host of online speech and leave it to the discretion of someone who is not familiar with national law? Assessing the specific elements of an offense requires knowledge and practice of criminal law, as well as consideration of the content's context.31 Also, using overly vague terms such as “manifestly” can be problematic.32 Clear legal definitions and specific criteria are mandatory to constrain a platform's discretion.33 In turn, platforms should commit to disclosing their definitions and the way they put legal requirements into practice.34
The central finding here is that the State delegates power to private actors to combat hate speech but is not precise enough in the wording of such transfer provisions. It hands over the power of evaluation and interpretation, creating a quasi-additional first instance, upstream from the Judiciary. Of course, citizens can still take action and contest a takedown decision in court. The initial takedown or stay-up decision is nonetheless made by the platform and, all in all, the user is subject to more rules in cyberspace—a development predicted by Palfrey. He identified four phases of Internet regulation, according to which we are now in phase 4, the phase of “contested access” (from 2010 onward).35 What Palfrey anticipated back in 2010 when he described the phase of “contested access” is not so far from reality today. The Internet is no longer “free and lawless” and citizens see the digital sphere as an integral part of their daily life. States have become more active in regulating cyberspace directly or indirectly. Intermediaries have to keep up with different regulatory frameworks and tend toward one-size-fits-all solutions that can collectively be more restrictive than single national regulations.36
Fake News and the EU
The Problem: Misinformation
Another issue on social media platforms and other intermediaries is the dissemination of misinformation, often referred to as fake news (although this term is imprecise and rather a buzzword). Fake news can be defined as “misinformation designed to mislead readers by looking like and coming across as traditional media.”37 Technical means such as social bots can be used to speed up sharing, make it more effective, or make it anonymous, without human intervention.38 When false information is designed and spread with the intention to mislead the recipient of the information, it is called disinformation.39 The element of intent is key when discussing the risks of disinformation for representative democracies because it raises the question as to the level of protection of freedom of expression and information as a precondition for participating in a democratic system. The intent of spreading false information is closely interwoven with the assessment of user data in order to identify the target groups needed to place targeted political advertising. However, political microtargeting and the associated phenomenon of “dark ads,” that is, political advertising that is only visible to certain users, do not fall within the narrow definition of fake news.
We are only now assessing to what extent disinformation might have affected past elections. Although the increasing usage of social media in the “leave” campaign for Brexit and during the 2016 US elections was critically observed by European lawmakers, the role social media platforms played became fully apparent only after the French elections in 2017,40 and the real turning point was the Cambridge Analytica scandal in March 2018.41 Fake news is not a new phenomenon, but the issue has become more pressing with the increasing availability of data. The disseminators of fake news have benefited from social networks to reach more people, and from technology that enables faster distribution and can make it more difficult to distinguish fake from hard news.42 Through its loose privacy policies, Facebook had given access to users' data in an unprecedented way: the private information of more than 50 million individuals had been used to predict and influence voting choices during the 2016 US presidential elections.43
The research conducted afterward showed that the harvested data was used to target voters with political ads and could be used to manipulate them, in a way that would be invisible to the public, posing a high risk to democracy. The fact that a company like Facebook would allow third parties to use behavioral advertising for political purposes, or at least not take the necessary steps to protect users' data, was shocking to everyone who learned about the privacy breach in March 2018.44 It also became clearer that Facebook's business model, that is, keeping users on the platform at all costs (attention economy), was more favorable to certain—more engaging—types of content, which might include fake news.45 In fact, the economics of user engagement explains why stories around the events of 2016 and 2017 were designed to be provocative and to catch users' attention.46 We cannot conclude from this observation that Facebook keeps fake news online on purpose, but it might indicate that the algorithm favors content with high engagement potential. In recent elections in Brazil, India, and Israel, social media have been increasingly used to purvey false information. Misinformation does not create unseen social divides; instead, it is used to fuel existing tensions between political, social, or religious groups.47 It will, therefore, serve populist campaigns more than others. In Brazil's case, the use of the messenger service WhatsApp to propagate misinformation about the rivals of far-right candidate Jair Bolsonaro could have led to his victory.48
The impact of misinformation on election results is contested because it is difficult to measure. Only the engagement of users with different types of content can be quantified, not the impact it has on their political opinion, nor their reasoning when choosing a candidate or a party.49 Addressing this issue is particularly complicated for lawmakers because the problem of misinformation lacks a solid factual basis: the true impact on electoral behavior is hard to measure, which is why one has to be very careful when designing and enacting regulatory frameworks. Another challenge is the possible backfiring of a regulation targeting fake news: it could have a negative effect on the very freedoms it aims to protect.50 While seeking to protect the free formation of public opinion and the integrity of public discourse, such a regulatory intervention would also have to combat the lack of trust in media outlets and the general mistrust of traditional gatekeepers and politics. It would attempt to intervene in the “feedback loop”51 of misinformation without disrupting the free flow of information.
The Reaction: EU Code of Practice on Disinformation
Fake news can be assigned neither to a political color nor to a certain country of origin, even if there are indications that it partly comes from abroad with the aim of interfering in the internal affairs of other countries, that is, of influencing internal politics, possibly to their advantage.52 In light of the elections for the EU Parliament in May 2019, disinformation and potential influence from right-wing populists were at the center of attention. Recent work has shown that the EU was not sufficiently prepared against disinformation,53 even if some experts say we should not overestimate the power of “fake news” in Europe by underestimating the ability of content recipients to identify false information.54 It was nevertheless a major concern, especially because the voter turnout for the EU Parliament has traditionally been below average and could have been pulled down by anti-EU content. Until recently, except for France, the EU and its Member States had not taken any action, leaving it to journalists to unpack, crosscheck, and reveal disinformation, thus making them the driving force when it comes to countering misinformation. Due to recent elections in other countries and the fear of intrusion from abroad, Member States are now considering regulatory interventions. In December 2018, France adopted a law against information manipulation in times of elections, introducing an additional form of expedited proceedings and higher transparency requirements as to political advertising on social media platforms.55 On the supranational level, the EU and leading tech companies (hereinafter signatories) agreed upon the self-regulatory Code of Practice on Disinformation (hereinafter CPD) in September 2018.56
The CPD defines disinformation as “verifiably false or misleading information” which, cumulatively, (a) “is created, presented and disseminated for economic gain or to intentionally deceive the public”; and (b) “may cause public harm,” intended as “threats to democratic political and policymaking processes as well as public goods such as the protection of EU citizens' health, the environment or security.” The CPD is organized in five relevant fields of commitments, each tackling the problem of disinformation from a different angle and with different measures. Essentially, the CPD encourages the signatories to implement self-regulatory standards to fight disinformation.57 They shall commit to “deploying policies and processes to disrupt advertising and monetizing incentives for relevant behavior” and commit to “enable public disclosure of political advertising.” The latter “could include actual sponsor identity and amounts spent.” They also commit “to put in place clear policies regarding identity and the misuse of automated bots,” as well as “policies on what constitutes impermissible use of automated systems.” To accomplish the goal of the CPD, namely “to address the challenges posed by the dissemination of disinformation,” the various signatories are allowed to operate differently, according to their respective modus operandi. Hence, they can choose if and how they comply with the commitments of the CPD, and to which extent they will go beyond their preexisting own set of rules in this area.
Self-Regulation as a Leverage?
So far, social media platforms have not reacted favorably to the demands of governments with regard to unwanted political misinformation. Political content ahead of elections was largely accepted as opinion and not subject to takedown.58 In addition, platforms offered their services in behavioral advertising without questioning the content and were reluctant to intervene in the area of fake or nonhuman (bot) accounts. Again, because most platforms come from a First Amendment understanding of free speech, and because the boundaries of political speech are generally difficult to draw, removing this type of content can tip over into collateral censorship, which explains their reluctance to delete.59 Nevertheless, the EU chose a self-regulatory form of governance to address the issue of misinformation. In doing so, the EU entrusted the problem to the platforms, in their field of action and at their discretion. Self-regulation is, generally speaking, a process “in which rules that govern market behavior are developed and enforced by the governed themselves.”60 There might be an instrument of soft law as a basis for these self-regulatory activities, that is, an agreement by which companies bind themselves to its terms, but one that is not based on any legislative act.61 Codes of practice provide detailed practical guidance on how to comply with legal obligations, and the signatories consent to follow this guidance unless another, higher standard is in place.62 However, the signatories are not bound to any specific deliverable, which leads to the question of whether this type of instrument is legitimate and sufficient. If self-regulation is the first step before lawmakers intervene, then it might not be an adequate answer to the problem of misinformation because the issue is simply too urgent.63
The signatories of the EU CPD all agree on the objectives and the suggested measures to take, but there is no legal obligation to implement them, nor any prescription as to how. The question that arises next is how to classify measures taken on the grounds of such codes. When might state action in the form of soft law be considered an indirect infringement of fundamental rights if it is implemented by private entities?64 Soft law instruments are flexible in terms of implementation but can be questionable when it comes to fundamental rights and the right to take legal action: who is to be held accountable for the implementation of the recommendations?65 This problem is not new: cooperation between the state and non-state actors based on agreements is often criticized when hidden from the public. Shall we consider enforcement by proxy an “unholy alliance” or a necessary cooperation between the State and private intermediaries?66 Should a contemporary concept of state action also include private behavior that can be attributed to the state on the basis of its intention when it adopts “soft law”?67
Interim Conclusion
In this first part, I have looked at how lawmakers respond to contemporary challenges on social media platforms such as hate speech and misinformation. In both cases, the state allocated the enforcement of rules or of guiding objectives to the platforms, or at least it allocated forms of power, understood as possession of control, authority, or influence over others.68 The two use cases show that—even after regulation or agreements—the platforms still govern the evaluation and interpretation of what actually constitutes unlawful content (in the case of the NetzDG) and of what type of false information might be harmful to democracy (in the case of the CPD).
The platforms' definitions of problematic phenomena such as hate speech and fake news do not always correlate with the offenses or misdemeanors targeted by the law. User-generated content that violates the law is unlawful under national law; if it infringes a platform's standards, it is unwanted. In many cases, that type of content will be both unwanted and unlawful, but that heavily depends on a country's laws.69 From a European perspective, it seems quite natural that freedom of expression can be limited by law. Article 10 (2) European Convention on Human Rights (ECHR), for example, stipulates that “The exercise of these freedoms, since it carries with it duties and responsibilities, may be subject to such formalities, conditions, restrictions or penalties as are prescribed by law and are necessary in a democratic society, (…).”70 Similarly to the German Basic Law, a limitation of Art. 10 (1) ECHR can be justified by the pursuit of one of the goals mentioned in Art. 10 (2) or based on the additional ground for limitation in Art. 10 (1) 3 ECHR.71 In the United States, the scope of protection is broader: speech that does not fall into one of the few categories of unprotected speech under the First Amendment is allowed as an opinion.72 Bearing in mind that the biggest social media platforms come from the United States and that their in-house and external legal counsel are most likely to be US-trained lawyers, the foundation for a platform's policies is free speech (non-)regulation under the First Amendment.73 This brings us to the question of how the corporations running social media platforms react to the challenges inherent to communication on their platforms.
The Corporate Reaction
Hate speech and misinformation are two main issues for social media platforms, which they have so far addressed with more or less strict content moderation rules.74 The latter have evolved over the past years and, due to a combination of circumstances (leaks and voluntary disclosure), the world has been granted more insight into their complex sets of rules.75 In addition to moderating user-generated content according to their community standards, platforms are increasingly turning to countermeasures that resemble state action: they create rules, enforce them, and “punish” those who break the rules, not only by taking down unwanted content but also by restricting their access to the platform, sometimes withholding or deleting accounts. At the same time, they commit to more transparency regarding the rules of enforcement, as well as to due process measures such as allowing appeals. While companies like Facebook sometimes bypass regulatory intervention,76 they create an apparatus that is on the surface similar to the state. The seeming similarity lies in the choice of instruments used to enforce rules and in providing users with procedural remedies. So far, such similarities are limited to external appearances: it seems that, after a period of perhaps overestimating users' tolerance for the mistakes made, platforms are now more receptive to outside pressure.77 Public opinion could form a type of counterpower that companies might bow to when they sense the users' demand for clear and transparent rules.78
Governing Speech with Private Rules
While it might sound obvious, platforms are (until now) protected by their freedom to govern their relationship with users, that is, private autonomy. Whether this relationship is considered contractual or not (depending here again on the national perspective), the terms of this agreement and their enforcement are to a great extent at the platforms' discretion.79 At the same time, platforms are facing higher societal pressure than a few years ago to behave according to their role in the new public sphere they have in part created.80 In Germany and in the jurisprudence of the European Court of Human Rights, fundamental rights such as freedom of speech and information may have a certain horizontal effect.81 The latter can occur directly or indirectly, that is, by binding private parties as such to fundamental rights or via court decisions between two private parties in which the judges ought to include and consider fundamental rights in their deliberation and, accordingly, in their opinion. The novelty does not lie in the doctrine of the horizontal effect itself but rather in its interpretation when it comes to platforms. Due to the platforms' important influence on communication and access to information online, courts could be more inclined to identify a direct or indirect binding effect of freedom of speech and information in cases related to content moderation. Recent decisions by German courts at the district and regional level show that judges are willing to balance the rights of users and platforms when users take action against takedown decisions, generally in favor of users' rights to express themselves when the contested content is not unlawful. In the following subsections, I will elaborate on the set of measures enforced against unwanted content, showing that this enforcement has so far not been different from the common practice between two private parties, but that it might be shifting toward practices usually attributed to administrative law.
Taking Down Content
Social media platforms decide which content they allow to be published and to remain visible according to their own sets of rules (“Facebook Community Standards,” “Twitter Rules,” “Reddit Content Policy,” “YouTube Community Guidelines,” etc.). These rules are generally enforced via notice and takedown, a two-step procedure mainly due to laws that limit the platforms' liability for user-generated content.82 Such laws are based on the assumption that social media platforms act as mere content-neutral intermediaries and should not be held responsible for bad content, when in fact they govern what type of content will be available and visible according to their standards and sets of rules.83 When users flag content and submit it for review, it will be reviewed by content moderators who eventually decide whether or not to delete it, according to their own interpretation (often shaped by a different cultural background, which is another concern in this area).84 Over time, the rules have been iterated and adapted according to the needs and the developments of each community.85 When the interpretation guidelines provide no answer to a specific content problem, the reviewers have to make ad hoc decisions, which might be escalated to another, generally more skilled, team for review and revision.86 If the content is found to violate a platform's rules, the consequence is removal, that is, primarily worldwide deletion, or geoblocking in cases where the violation affects only a certain country or region.87
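To make the flag, review, escalate, and remove/geoblock sequence just described more tangible, here is a minimal, purely illustrative sketch in Python. The guideline structure, outcome labels, and function names are assumptions made for this example and do not reflect any platform's actual moderation system.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Hypothetical outcome labels; real moderation systems are far more granular.
REMOVE_GLOBALLY = "remove_globally"
GEOBLOCK = "geoblock"
KEEP_UP = "keep_up"
ESCALATE = "escalate"


@dataclass
class Guideline:
    matches: Callable[[str], bool]  # does the flagged text violate this rule?
    national_scope: bool = False    # violation only relevant in one jurisdiction


def review_flagged_post(text: str, reported_rule: str,
                        guidelines: Dict[str, Guideline]) -> str:
    """First-tier review of a flagged post, mirroring the notice-and-takedown
    sequence described above."""
    guideline = guidelines.get(reported_rule)
    if guideline is None:
        # No interpretation guideline covers the case: escalate the ad hoc
        # decision to another, more experienced review team.
        return ESCALATE
    if not guideline.matches(text):
        return KEEP_UP
    # Violations confined to one country or region are geoblocked there;
    # otherwise the post is removed worldwide.
    return GEOBLOCK if guideline.national_scope else REMOVE_GLOBALLY
```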
Restricting, Withholding, or Deleting Accounts
The following measures against violations of rules are more drastic than taking down contested content and will probably only be applied when the violation happens repeatedly. One sanction can be to restrict access to a platform's service, another to withhold or delete individual accounts. Sanctioning can also result in temporarily limiting the user's ability to create posts or interact with other users. In the past, it seems that platforms were reluctant to withhold or delete profiles, sometimes even leaving it to other users to act upon profiles that would disseminate hate speech.88 In the aftermath of the events mentioned earlier, such as Brexit and other election campaigns, Twitter started massively removing unwanted accounts89 and Facebook started removing so-called “bad actors” or “coordinated inauthentic behavior,” that is, multiple pages, groups, and accounts found to be “working to mislead others about who they are and what they are doing.”90
Restricting the Usage for Unwanted Purposes
All in all, platforms mostly react to unwanted content on an individual level: their sanctions affect single users or single accounts. But of course, they can also enforce their rules by design, hence affecting the whole community and achieving broader results. To prevent users from making use of the services in an undesired way, platforms can restrict how users interact with the front end. For example, when a messenger service restricts the number of contacts to which a message can be sent, or when a network no longer allows users to add nonbefriended accounts to groups, misinformation cannot be distributed as easily as before. WhatsApp is, for instance, restricting the mass propagation of information to countless contacts, in reaction to the allegations over the Brazilian and Indian elections in 2018.91 Similarly, Facebook is restricting the use of groups that “repeatedly share misinformation.”92 It has also augmented the role of moderators and restricted users from randomly adding other users to groups. Hence, the individual is not exposed to group activities that she did not actively subscribe to. This type of change is not to the users' detriment since opt-in solutions are generally preferable for consumers, compared to opt-out.93 Design choices affect all users in the same way and do not target speech based on its content, which makes them less rights-infringing than content-based sanctions.
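As a purely illustrative sketch of such design-level friction, the following Python snippet shows a forwarding cap and an opt-in rule for group membership. The specific limit and the function names are hypothetical assumptions for illustration, not the parameters actually used by WhatsApp or Facebook.

```python
from typing import List

MAX_FORWARD_RECIPIENTS = 5  # hypothetical cap; the actual limit is chosen by the platform


def forward_message(message_id: str, recipients: List[str]) -> List[str]:
    """Enforce a design-level cap on how many chats a message can be
    forwarded to at once, adding friction against mass propagation."""
    if len(recipients) > MAX_FORWARD_RECIPIENTS:
        raise ValueError(
            f"Forwarding is limited to {MAX_FORWARD_RECIPIENTS} chats at a time."
        )
    return recipients  # in a real system, delivery would happen here


def add_to_group(group_members: List[str], candidate: str, opted_in: bool) -> List[str]:
    """Only add a user to a group if she has actively opted in (opt-in by design)."""
    if not opted_in:
        return group_members
    return group_members + [candidate]
```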
To summarize, platforms have their own sets of rules when it comes to user-generated content and react differently to users violating these rules. The sanctions differ in how much they affect users, but the bigger networks use similar sanctions to counter negative phenomena such as hate speech and misinformation. Content moderation does not resemble the mechanisms of administrative law, but rather those of domestic authority. The companies owning a social media platform govern it via private rules, and the interpretation of these rules is solely at their discretion. Therefore, the relationship between a user and a platform is more comparable to an agreement to use a room under certain terms and conditions. I base this argument on the fact that users often do not know why their posts were deleted, how they can appeal this decision, and to what extent they will be affected by the platform's sanction. Because content moderation is mainly perceived as a case-by-case review and does not follow procedural rules, it cannot (so far) be compared with any formal interaction with the state.
The Sound of Administrative Law
Since the Cambridge Analytica scandal, Facebook has adopted a tone that resembles that of the state when describing its internal governance structure, and its actions tend to imitate governmental behavior: it has announced structures and procedures similar to administrative law.94 First, it published the guidelines on the basis of which reviewers decide how to interpret community standards. Second, it announced an independent body of appeal which could potentially empower users as to their right to contest takedown decisions and other sanctions. These two measures lead to more exposure, hence more accountability, which is more common for the state, whereas private parties are not subject to principles such as due process.95 Companies may engage in stronger internal regulation for various reasons, but it generally has a countereffect on the State's activities in terms of regulating, inspecting, and so on.96 This observation can, for example, be applied to the area of content moderation, where social media platforms fear more interference from (European) lawmakers.97 The reforms decided by Facebook throughout 2018 had multiple motivations, certainly in part to avoid more laws comparable to the NetzDG. The extent to which Facebook intends to self-regulate and to delegate a field of influence that is so crucial to its business goes beyond what companies usually commit to in their self-regulatory declarations.98
More Transparency
A main point of criticism had been that people outside of Facebook could not comprehend how community standards were interpreted and enforced.99 In some cases, the decisions about takedowns seemed incoherent, which confirmed the suspicion that they were made ad hoc and with double standards.100 Facebook increasingly committed to transparency and came forward by publishing its body of internal enforcement guidelines.101 It is now slightly clearer how Facebook defines hate speech and which content falls under this definition.102 The guidelines disclose that, in case of doubt, Facebook will remove content which it has identified as hate speech and where a diverging intent is not recognizable (“Where the intention is unclear, we may remove the content.”). Also, Facebook attributes higher credibility to content that was posted under clear names, assuming that such users are less likely to post hateful content (“we believe that people are more responsible when they share this kind of commentary using their authentic identity”). Nevertheless, categories such as hate speech remain broad and their definition vague.
As to the problem of misinformation, Facebook's guidelines for moderating false information are rather short, stating that it does not remove false news but reduces “its distribution by showing it lower in the News Feed.” Measures are mainly to disrupt “economic incentives” to “propagate misinformation” and to reduce “the distribution of content rated as false.”103
Implementing a Body of Appeal
As part of the changes in the aftermath of the Cambridge Analytica scandal, Facebook's CEO Zuckerberg announced in late 2018 the implementation of an appeals process for takedown decisions and, later, for any decision users would be subject to.104 An appeals process would allow users to express their disagreement with any of the measures explained in the earlier subsections, giving them the chance to contest a sanction. The way that the body of appeal would decide upon users' requests would be governed by an independent oversight board.105 The appeals process and the oversight board are two distinct but closely interconnected elements, since the oversight board's work will potentially have a direct effect on the outcome of an appeal. According to the Draft Charter, it will “be a body of independent experts who will review Facebook's most challenging content decisions—focusing on important and disputed cases. It will share its decisions transparently and give reasons for them.”106 So far, the way content decisions shall be reached has not been finalized (“how to render independent judgment on some of Facebook's most important and challenging content decisions.”107). The Board will probably work in smaller panels populated by board members reflecting both diversity and local concerns.108 Many questions still need to be answered before an institution of that kind is actually fully operational (planned for late 2019).
Although these developments sound user-friendly and seem to strive for a more transparent and fair Facebook, one must bear in mind several deficits: the possibility to contest a decision does not automatically mean that users will know on which grounds the initial decision was taken in the first place. It also does not guarantee more transparency in terms of explainability as to granting or dismissing the appeal.109 The oversight board will not be able to review all of the appeals filed because there are simply too many takedown decisions. Instead, the aggrieved user will probably be allowed to file an appeal that will be submitted to content reviewers. Only specific cases (probably those of fundamental significance for future cases110) will be submitted to the Oversight Board. If the oversight board is not regarded as independent, its decisions will not ameliorate the way content is moderated on Facebook.
After a period of consultation, Facebook expressed the wish to have a fully operational oversight board by the end of 2019. The “Global Feedback and Input on the Facebook Oversight Board on Content Decisions” report (published in June 2019) shows that the selection process of the board members was one main concern expressed by the experts consulted. Per this report, Facebook plans to select the first round of approximately 40 board members for a period of three years and leave it up to them to select their successors.111 The cases for review will “come before the Board through two mechanisms: Facebook-initiated requests and user-initiated appeals.”112 As to the effect of the decisions made by the Board, Facebook commits to be fully bound “on the specific content brought for review” and recognizes that the Board's decisions “could potentially set policy moving forward,” which means that they will have “precedential weight” while still allowing the Board to differ (“flexibility”).113 It remains to be seen which parts of this report will eventually be perpetuated in the structure of the Board. Lastly, there is no certainty about how appeals will be granted and remedies implemented. Put-back rights do not per se include the right to be visible in the same way as the post would have been without the initial takedown. Of course, a remedy granted after a successful appeal could include being published where the initial content was supposed to appear, but considering the high density of online content, such a form of remedy would not be sufficient because of the lack of visibility. The appellee might commit to republishing the content and to making it visible in the way it was initially published—assessing the implementation of such a commitment will still be very difficult given the nature of each user's individual algorithmic selection. This issue of put-back rights versus visibility has often been overlooked, although it is central when thinking about remedies against unsubstantiated removals.
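The routing logic sketched in the report (two entry points, escalation of selected cases, binding effect for the specific content) can be pictured as follows. This is a minimal, hypothetical sketch in Python; the field names, criteria, and routing labels are assumptions for illustration, not Facebook's actual design.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Origin(Enum):
    USER_APPEAL = auto()       # a user contests a takedown or another sanction
    PLATFORM_REQUEST = auto()  # the platform itself refers a hard case


@dataclass
class Case:
    content_id: str
    origin: Origin
    fundamental_significance: bool  # hypothetical flag, e.g., likely to set policy going forward


def route_case(case: Case) -> str:
    """Route an appeal: most user appeals are re-reviewed internally, while
    selected, significant cases reach the oversight board, whose decision is
    binding for the specific content and carries precedential weight."""
    if case.origin is Origin.PLATFORM_REQUEST or case.fundamental_significance:
        return "oversight_board"
    return "second_tier_review"  # ordinary re-review by content moderators
```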
The novelty of Facebook's Oversight Board does not lie in the establishment of a separate board or a commission overseeing the activities of a company's board of executives. This type of control mechanism can be found in countries following a two-tier system in which the supervisory board is composed of nonexecutive directors.114 What is in itself unprecedented is the setup of a sort of court, whose decisions have an impact on the company's daily business and which is open to users' requests; it shows the wish to gain the public's trust at last. The judiciary and its independence from other sources of power, as an inherent feature of its own power, is something that one would want to emulate because it symbolizes high integrity and legitimacy when it comes to the decisions it takes and, in general, checks and balances. Hence, it seems like an attractive model to follow. The social logic of courts has been studied in the past and needs to take a bigger part in this conversation.115 In order to take credit for giving up responsibility and power, that is, for subordinating content moderation policies to the decisions of a board of people who are independent of the corporation, one would need to follow certain guidelines. It is not enough to call such an oversight body “independent” and to delegate accountability for unpopular decisions. This question, however, is more substantive: how does one define the judiciary's independence, which requirements would a private body have to fulfill to be truly comparable to a court, and is that even conceivable in our legal systems? Just as state authority cannot be endorsed by a private actor (or only up to a certain degree), the explicit authority of the judiciary is beyond question.
Conclusion: From Private Ordering to a New Administrative Law?
The Described Actions by States and Companies Create Synergies
While states are still struggling to identify adequate answers to the regulatory challenges posed by online communication, intermediaries and especially social media platforms are increasingly considering the implementation of procedural rules in order to assume their responsibilities. Is the platforms' enforcement of their own rules becoming the administrative law of this genre of the public sphere? Are we witnessing a real shift of responsibility to the private actors, as many scholars have been urging?116 Platforms moderate content and sanction users for violating their terms of service on the grounds of rules that are increasingly disclosed to the public. Takedown decisions on an individual level may—at least in Facebook's case—now be appealed. The case could then potentially be escalated to a higher instance, such as the highest appeal body (similar to a higher level of jurisdiction). This notice of appeal would require a full disclosure of the reasons why the initial takedown decision was made, so that users can justify why they filed an appeal. The decision reached by the highest appeal body (e.g., the “Oversight Board”) would be binding on all content moderators (the Executive), who could eventually be forced to reverse the removal. All in all, the procedure would resemble administrative procedure, but within a private corporation.
If we consider how both Germany and the EU transfer the power of evaluation and interpretation to platforms on the one side, and the way platforms establish mechanisms supposed to introduce transparency and due process on the other, two main conclusions can be drawn. First, states and corporations seem to follow the same substantive goals in terms of hate speech and misinformation, even if for different reasons. Second, they seem to move toward each other, which could lead to them eventually meeting halfway. To the first point: just as states had (partly) forbidden hate speech and disinformation by law and through jurisprudence before the social web emerged, some are now regulating online speech (directly or indirectly) in order to protect democracy and for reasons of social cohesion. The companies running social media platforms are equally not in favor of contentious content such as hate speech or misinformation, because they do not want to lose customers (which is why they commit to moderating content in the first place).117 Although so-called clickbait content might be beneficial to their business model, contentious content is in the long run not attractive to users. In light of this finding, platforms will perhaps consider building a more sustainable business model.118 Either way, it is now in every actor's (state or private) interest to act against hate speech and misinformation (at least within their own sphere of influence). This might include new structures such as transferring the power of interpretation (NetzDG, CPD) or building parallel evaluation structures (individual takedown decisions, appeal procedures). Hence, even if Facebook's proposal is largely perceived as a blame-shifting campaign, it would factually help to clear the company of the blame it earns for the way it moderates content119 and could turn out to be a turning point in social media governance.
States and Companies Sharing Responsibilities?
To my second point: states and platforms are moving closer to each other in the way they govern online speech, although the stronger trend can be observed in the platforms adopting mechanisms traditionally associated with state actors. This is different from the idea of shared responsibility in the concept of coregulation, in that it is not precisely based on a regulatory arrangement.120 On the contrary: it might be the lack of a regulatory arrangement that pushes private platforms to adopt this type of state-like procedure. Coregulation as a form of cogovernance between platforms and the State relies on a regulation that leaves more than average room for the affected private actor to implement.121 Measures of coregulation provide more guidance and, accordingly, more accountability than self-regulatory measures, which goes hand in hand with less autonomy for the parties concerned. The fact that private actors are aiming for more accountability could partly be due to the uncertainty of intermediary liability on a global scale: on the one hand, private actors can—within the leeway of private autonomy—design their own contractual relationship with customers, and on the other, they ought to comply with specific intermediary regulation in some countries, but not in others. It usually takes time to comply with a new regulation because it first needs to be filled with meaning via interpretation. This uncertainty is maximized when private actors operate in multinational contexts because of the potential compliance obligations, hence it could incentivize them to proactively steer a course of self-regulation.122 The question is whether the situation described earlier as “meeting halfway” is different from what has been observed so far when a new regulatory challenge was identified. The hitherto existing scholarship on legal endogeneity has focused on theorizing how private actors comply with new regulation and how they fill vague terms with their interpretation until courts confirm or change this meaning.123 Although the preambles and explanatory memorandums of new laws give background information and help translate nonspecific terms into corporate policies that align with the initial regulatory goals, there is an interpretation gap that private actors can fill at first. Gilad characterized as “managerialized” a sequence in which companies fill the room left by the law with managerial values and goals.124 The “managerial” layer is the subsequent transposition of the regulatory goals into business reality.
However, the present situation is different in that private actors fill their own corporate rules with state-like procedures instead of managerial values and goals. They do not await a regulatory act that would infuse the contractual relationship between users and platforms with principles usually known from constitutional law, such as the protection of liberties. Turning toward structures and rules that resemble those of state actors is uncommon because it usually is not to a private actor's advantage. Corporate codes of conduct are, for example, perceived as a substantive form of self-regulation, but it is still uncommon for private actors to commit to mechanisms that only state actors endorse, because of the obligations that come along with them.125 All the more so as the strict regime of administrative law lacks the flexibility of private law and is consequently less favorable to the actors it is imposed upon. Therefore, it seems counterintuitive that private actors in a regime of self-regulation would fill the broad room for interpretation (granted on purpose by the legislator) with rules commonly associated with the strict scrutiny imposed upon state actors. At the same time, it suggests higher procedural accountability,126 generally associated with “better rules,” that is, “facilitating adherence to public interest goals and constraining diversion to private interests.”127 The fact that administrative law incorporates the principles of the rule of law and of due process makes it attractive when the values one wishes to convey are trust and transparency.128
When using the term administrative law I specifically do not mean the area of digital constitutionalism.129 Of course, one cannot fully separate administrative law in the context of social media platforms from the broader conversation on Internet governance and global constitutionalism but it is more how non-state actors incorporate mechanisms of administrative law that I wish to highlight here. Other researchers have examined whether large platforms such as Facebook could be treated as a state, comparing their set of rules to a constitution.130 Although this approach acknowledges the power that big tech companies have on users worldwide, assimilating them with or treating them as states might not be the way forward. Nonetheless, due process and fundamental rights should play a bigger part in the process of content moderation and could be equally incorporated by states and companies.131
This article has shown that the rules adopted by the State and by the platforms might converge to some extent, as opposed to those of the platforms and the users. Helberger, Pierson, and Poell thought of the convergence of platforms and users as “a cooperative responsibility,” based on the idea that “platforms and users need to agree on the appropriate division of labor with regard to managing responsibility for their role in public space.”132 In contrast to this proposition, the developments explained in the present article show that cooperation is factually happening between the State and the platforms beyond regulatory arrangements, because both are governors.133 In my opinion, users are important actors in this matter but they should not be allocated more responsibility than the law already provides. Users can be held accountable for their behavior and the way they communicate online, but not for the structural deficiencies of platforms. Because users generate the content that creates the basis of transactions in the attention economy, they are contributing sufficiently to that ecosystem. They produce and consume content and, either way, spend time on social media platforms, which generates more data beneficial to the platforms hosting the content.134 Users in this constellation are consumers and citizens but not cogovernors or coregulators. Nonetheless, the general idea of sharing responsibilities in order to achieve better results is also the driving idea behind this article.
Then again, the responsibility shared between state and non-state actors should not result in an additional threat to freedom of speech. Indeed, cooperation between states and social media platforms has been criticized in the past for increasing the risk of collateral censorship, especially when it operates within an opaque legal framework.135 To prevent further developments in that direction, governments should carefully monitor self-regulation and foster transparency. Furthermore, they should scrutinize whether the measures taken by private actors against hate speech or misinformation are in line with the goals set.
Proactively Avoiding More Governmental Intervention
All in all, the vision of private actors committing as far as possible to democratic values and fundamental rights is appealing, but are the responsibilities really shared between states and platforms? Facebook's example shows that the more its policies resemble, and appear to be inspired by, the principles and human rights frameworks of representative democracies, the more they will inspire users' confidence and appear as an act of goodwill.136 Making decisions appealable and, more importantly, introducing the concept of separation of powers within a private actor is a move toward state-like structures.137 Agreeing to be bound by, and to implement, the decisions of an independent institution, just as public authorities are bound by court decisions when both sides are part of the state, is not common among corporations. If this spreads throughout the industry, at least among the large tech companies that provide intermediary services, it could influence how social media platforms integrate (or plan on integrating) procedural rules, for example, allowing users to file appeals and to request remedies under their moderation policies.
Although such developments could have positive effects, one needs to bear in mind the inherent risk of whitewashing: when companies create an external body to whose decisions they subordinate themselves, that body can be misused as a lightning rod. The platforms show a stronger tendency toward policies that resemble administrative law than the other way around. The two use cases in Germany and the EU show that the State first and foremost delegates to private actors, even though self-regulation does not suffice in the context of content moderation.138 More specifically, instruments of soft law are insufficient if the goal is to force the targeted actor to actually perform. Unenforceable practical guidance instead leaves platforms free to pursue their own agenda, for better or worse. Only when regulation stipulates “sticks,” that is, financial disadvantages such as the high fines under the NetzDG, will the provisions be implemented. This implies neither an obligation to adopt similar regulatory measures, nor is it necessarily the optimal approach with regard to collateral censorship, but it demonstrates the weakness of soft law (i.e., self-regulation) and raises questions as to the duty of legislators to act more firmly toward private actors when democratic principles are at stake. It remains to be seen whether the new self-imposed mechanisms are the best way forward for social media platforms and especially for their users.
Footnotes
Cox.
Tucker et al.; Kaye, 7–8.
Benkler, Faris, and Roberts, 362.
Definition by Lexico, retrieved from https://www.lexico.com/en/definition/regulation, accessed September 10, 2019.
The terms social media platforms and platforms will be used as synonyms hereinafter and are defined as general-purpose platforms for social communication and information sharing (as in Helberger, Pierson, and Poell, 1); intermediaries is used as the generic term for Internet access providers and hosts.
Definition from Merriam Webster, retrieved from https://www.merriam-webster.com/dictionary/hate%20speech, accessed April 22, 2019.
Waldron, 9–10.
Ibid., 13.
Official translation, retrieved from https://www.gesetze-im-internet.de/englisch_gg/englisch_gg.html#p0037, accessed April 22, 2019.
It also needs to be constitutional in a more general sense, but these are specific conditions to Art. 5 Basic Law.
Official translation, retrieved from https://www.gesetze-im-internet.de/englisch_stgb/englisch_stgb.html#p1246, accessed April 22, 2019.
Kinstler, “Can Germany Fix Facebook?”
Klonick, 1621.
Content moderation refers to the way social media platforms handle user-generated content.
Roberts, 2; Gillespie, 8–9.
Faus and Storks, 14.
Which is also related to the liability exemption under the EU E-Commerce directive: Kuczerawy, “Intermediary Liability & Freedom of Expression,” 48–49.
Constine.
Kinstler, “Can Germany Fix Facebook?”; Palfrey, 990.
I use takedown as a generic term for the removal of content, whether it is deleted or blocked. In some contexts this distinction is relevant; under the NetzDG, for example, companies delete content that violates their community guidelines and geoblock content that contravenes German law. However, the distinction does not add to the argument of this article, which is why I will not elaborate on it further here.
Speech delivered on June 30, 2017, retrieved from https://www.bmjv.de/SharedDocs/Reden/DE/2017/06302017_BT_NetzDG.html, accessed April 22, 2019.
Heldt, “Reading between the Lines and the Numbers.”
Kinstler, “Germany's Attempt to Fix Facebook.”
Kaye, 7; Richter, “Das NetzDG—Wunderwaffe gegen „Hate Speech“ und „Fake News“.”
Cf. Schulz.
Guggenberger, 2582.
Schmitz and Berndt, 7; Wischmeyer, 15–16; Buermeyer.
Echikson and Knodt, 11.
Heldt, “Reading between the Lines and the Numbers.”
Wischmeyer, 20.
Wimmers and Heymann, 100.
Nunziato, 392; Belli, Francisco, and Zingales, 52; Citron, “What to Do About the Emerging Threat,” 3; Kaye, 10.
Nolte, 556–58; Liesching, 27; Wischmeyer, 15–16.
Citron, “What to Do About the Emerging Threat,” 5.
Palfrey, 991–92.
Kaye, 12.
Waldman, 849.
Bradshaw and Howard, 10–11.
Definition from Merriam-Webster, retrieved from https://www.merriam-webster.com/dictionary/disinformation, accessed April 30, 2019.
Bakir and McStay; Benkler, Faris, and Roberts; Lazer et al.; Timberg and Romm.
Vogelstein and Thompson.
Bradshaw and Howard, 16.
Cadwalladr and Graham-Harrison; Vogelstein and Thompson.
Cadwalladr and Graham-Harrison.
Bakir and McStay, 165.
Bradshaw and Howard, 11.
Barel.
Belli; Iglesias Keller.
Benkler, Faris, and Roberts, 384.
Bradshaw and Howard, 16–17; Helberger, Pierson, and Poell, 7.
Waldman, 851.
Howard et al., 39.
Investigate Europe Team; Schmidt, Schumann, and Simantke.
Hölig.
LOI n° 2018-1202 du 22 décembre 2018 relative à la lutte contre la manipulation de l'information (Law No. 2018-1202 of December 22, 2018 on the fight against the manipulation of information), published December 23, 2018, retrieved from https://www.legifrance.gouv.fr/affichTexte.do;jsessionid=97BB2F082E6CF33B3399966E8A6CE9BD.tplgfr31s_3?cidTexte=JORFTEXT000037847559&categorieLien=id, accessed April 24, 2019.
EU Commission.
Similar to the Honest Ads Act, a bill proposed to the U.S. Congress in 2017 and still in progress, retrieved from https://www.congress.gov/bill/115th-congress/senate-bill/1989, accessed May 1, 2019.
See infra.
Helberger, Pierson, and Poell, 8.
Latzer, Just, and Saurwein, 376.
Cini.
Wikipedia, retrieved from https://en.wikipedia.org/wiki/Code_of_practice, accessed April 25, 2019.
Bradshaw and Howard, 16.
Andreas Voßkuhle and Anna-Bettina Kaiser, “Der Grundrechtseingriff,” Juristische Schulung (2009): 313; Julian Staben and Markus Oermann, “Mittelbare Grundrechtseingriffe durch Abschreckung?—Zur grundrechtlichen Bewertung polizeilicher „Online-Streifen“ und „Online-Ermittlungen“ in sozialen Netzwerken,” Der Staat (2013): 630, 637.
Latzer, Just, and Saurwein, 375, about the potential disadvantages of self-regulation as compared to state regulation.
Birnhack and Elkin-Koren, 14.
See also Kuczerawy, “The Power of Positive Thinking,” 235.
Definition from Merriam-Webster, retrieved from https://www.merriam-webster.com/dictionary/power, accessed April 26, 2019.
Keller, 8.
Retrieved from https://www.echr.coe.int/Documents/Convention_ENG.pdf, accessed April 27, 2019.
Cornils, para. 52.
Jackson, 136.
Klonick, 28.
Citron, “What to Do About the Emerging Threat,” 2–3; Langvardt, 1358–63.
Constine.
For example, Facebook's (non-)compliance with the NetzDG obligation to implement a user-friendly and accessible complaint procedure: Heldt, “Reading between the Lines and the Numbers,” 11. In July 2019, the German Federal Office of Justice fined Facebook for underreporting complaints falling within the NetzDG's scope of application; see, among many, Deutsche Welle, retrieved from https://www.dw.com/en/germany-fines-facebook-for-underreporting-hate-speech-complaints/a-49447820, accessed September 10, 2019.
Klonick, 1649–50.
Benkler, Faris, and Roberts, 22.
For more on the platforms' private ordering, see Belli and Venturini, 4.
Castells, 410–28; Balkin; Keller.
Namely the “mittelbare Drittwirkung der Grundrechte” (indirect horizontal effect of fundamental rights) in Germany and the “théorie des obligations positives” (doctrine of positive obligations) in the ECtHR's jurisprudence.
For example, section 230 Communications Decency Act in the United States and Art. 14 E-Commerce Directive in the European Union.
DeNardis and Hackl, 766.
Roberts, 2–3.
Constine.
Koebler and Cox.
Ibid.; Heldt, “Reading between the Lines and the Numbers,” 8.
Farkas and Neumayer.
Timberg and Dwoskin.
Facebook Newsroom, December 6, 2018, retrieved from https://newsroom.fb.com/news/2018/12/inside-feed-coordinated-inauthentic-behavior/, accessed April 29, 2019.
“A test to limit forwarding that will apply to everyone using WhatsApp,” launched on July 19, 2018 and last updated on January 21, 2019, retrieved from https://blog.whatsapp.com/10000647/Weitere-%C3%84nderungen-an-der-Weiterleitungsfunktion?lang=en, accessed July 21, 2019.
Facebook Newsroom, “Remove, Reduce, Inform: New Steps to Manage Problematic Content,” April 10, 2019, retrieved from https://newsroom.fb.com/news/2019/04/remove-reduce-inform-new-steps/, accessed July 2, 2019.
Lai and Hui, 260; Smith.
Facebook Newsroom, April 24, 2018, retrieved from https://newsroom.fb.com/news/2018/04/comprehensive-community-standards/, accessed April 29, 2019.
Latzer, Just, and Saurwein, 375.
Malhotra, Monin, and Tomz, 19–20.
“Why big tech should fear Europe,” The Economist, March 23, 2019, retrieved from https://www.economist.com/leaders/2019/03/23/why-big-tech-should-fear-europe, last accessed July 2, 2019.
Douek.
Crawford and Gillespie.
Kessler.
Facebook's Community Standards, retrieved from https://www.facebook.com/communitystandards/, accessed April 30, 2019.
Facebook's Policy Rationale for hate speech, retrieved from https://www.facebook.com/communitystandards/hate_speech, and Facebook's Hard Questions on hate speech, retrieved from https://newsroom.fb.com/news/2017/06/hard-questions-hate-speech/, accessed April 29, 2019.
Facebook's Policy for False Information, retrieved from https://www.facebook.com/communitystandards/false_news, accessed April 30, 2019.
Zuckerberg.
Zuckerberg referred to the oversight board as a “Supreme Court of Facebook.”
Facebook.
Harris.
“Global Feedback & Input on the Facebook Oversight Board,” 24–25.
So far, Zuckerberg announced “We're also working to provide more transparency into how policies were either violated or not.” See Zuckerberg.
FB's Oversight Board charter June 27, 2019?
“Global Feedback & Input on the Facebook Oversight Board,” 17, 21.
Ibid., 22–23.
Facebook, 3; Ibid., 27.
See, for example, sec. 30 German Stock Corporation Act and Art. 51 Company Law of the People's Republic of China.
Shapiro, 1–64; Grossman.
Pasquale, 512–13.
Klonick, 1664.
Helberger, Pierson, and Poell, 8; Burt.
Klonick and Kadri; Douek.
Latzer, Just, and Saurwein, 377.
Gorwa, 864.
Grajzl and Murrell, 523; Léonard et al., 174.
Edelman, Uggen, and Erlanger.
Gilad, 136.
Jenkins, 8.
Ogus, 643.
Ibid.
Haufler, 85; Walker, 150.
Gill, Redeker, and Gasser; Padovani and Santaniello; Suzor.
Celeste, 3–4.
Suzor, 4; Kaye, 14–15.
Helberger, Pierson, and Poell, 2.
Klonick, 1603.
Zuboff, 199–200; Burt.
Citron, “Extremist Speech”; Birnhack and Elkin-Koren; Elkin-Koren and Haber; Heldt, “Upload-Filters: Bypassing.”
Haufler, 85.
Teubner, 20.
Nurik, 2893; Lievens, 87.