Abstract
Social bots, automated agents operating in social networks, are suspected of influencing online debates and opinion-formation processes and thus the outcome of elections and votes. They do so by contributing to the dissemination of illegal content and disinformation and by jeopardizing an accurate perception of the relevance and popularity of persons, topics, or positions through their potentially unlimited communication and networking activities, all under the false pretense of human identity. This paper identifies and discusses preventive and repressive governance options for dealing with social bots at the state, organizational, and individual levels while respecting the constitutional provisions on free expression and opinion-formation.
The influence of social bots on the results of elections and votes as well as on debates about issues such as vaccinations or migration is attracting more and more public and political attention. It is argued that social bots, automated agents operating in social networks, may contribute to the spread of defamation, disinformation, and conspiracy theories and may distort opinion-formation by feigning urgency and popularity of issues and persons.1 There are also concerns that, as social bots become more similar to human users in appearance and function, they will be even more difficult to identify and their manipulative potential will continue to increase.
Scientific findings on social bots are likewise largely critical. Signs of interference in political discourses, opinion-formation, and decision-making through the use of social bots of both domestic and foreign origin were found in several cases, including in elections in the United States,2 Japan,3 Germany,4 and France;5 in referendums in Great Britain,6 Switzerland,7 and Catalonia;8 in conflict situations such as in Ukraine,9 Syria,10 and Mexico;11 and in connection with controversial issues such as vaccinations.12 A study by Varol et al. estimated that 9% to 15% of all profiles on Twitter were bots.13 Values of up to 20% were calculated for the 2016 US election campaign.14 Simulations have also shown that, within a highly polarized setting, even a small number of social bots can be sufficient to tip the opinion climate.15
When systematizing the challenges in connection with social bots discussed in public, political, and scientific debates, three specific problem areas can be identified16:
The dissemination of illegal content and disinformation
Under the false pretense of human identity
By means of a potentially unlimited number of communication and networking activities
The causes of these problems lie not only with the initiators and programmers of social bots; they can also be viewed as the consequence of organizational or individual actions of various actors with asymmetrical resources and possibilities to implement measures and of gaps in the allocation of responsibility among them, that is, as a “problem of many hands.”17 These actors include legislators who consciously or unconsciously fail to adopt standards against malicious bot activities, operators of social networks (including their management, programmers, etc.) whose network infrastructure and terms of use allow or even promote malignant bot activities, and social network users who intentionally or negligently disseminate or endorse such activities.
Given that the responsibility for collective action cannot be assigned to a single actor, that the adopted measures are networked in nature, and that the areas in which social bots as well as their initiators operate are not confined by territorial or national borders,18 solutions for these problem areas can only be worked out by taking into account different levels of regulation and the interactions among the aforementioned actors. This article therefore identifies and discusses, from a governance perspective,19 preventive and repressive state, organizational, and individual measures against (potentially) harmful activities of social bots that have already been implemented or are conceivable. Since there is (so far) no intergovernmental regulation of platforms in general or of social bots specifically, the discussion of possible state interventions refers to national legislation. In particular, the article discusses the limits and possibilities of the Swiss Federal Constitution (Cst.), especially the provisions on free (political) expression and opinion-formation. Since such provisions also have constitutional status in many other democratic states, the following explanations are not limited to Switzerland but can be considered pars pro toto, even though the legal basis for bot regulation is of special significance in Switzerland because of its numerous possibilities for direct democratic participation, including popular votes several times a year. In addition, national regulations of other states or associations of states (e.g., United States, Germany, and European Union [EU]) are also addressed if they have taken or are discussing measures explicitly targeted at social bots.
To this end, we next briefly outline the key features of social bots. The potential impact of social bots on opinion-forming processes in social networks and the necessity of regulatory measures are discussed in the section Mechanisms of Social Bots. In the subsequent section we then discuss options for the governance of social bots, taking into account state, organizational, and individual measures and actors.
Definition and Key Features of Social Bots
Social bots fall into the broader category of bots, which are generally defined as autonomous, reactive software agents capable of perceiving signals and performing actions in a specific online environment for the purpose of pursuing an agenda20 that is usually specified by a programmer. Social bots can be distinguished from other bots by the nature and intended effect of the activities they perform. Other types of bots include web robots that collect information through crawling and scraping, chatbots that are used as human–machine communication systems mostly in commercial settings, and spambots that are deployed to direct users to compromised websites.21 The literature further differentiates between social bots and hybrid forms such as trolls or cyborgs, which combine automation with human profile behavior, can pursue similar goals as social bots, and are sometimes used in combination with them.22
Social bots are computer algorithms that automatically produce and distribute content in social networks or online forums and interact with human users as well as other bots, thereby imitating human identity or behavior in order to (possibly) influence opinions or behavior.23 Social bots operate on behalf of individual or collective actors, for example, for political or commercial motives. They are often associated with and investigated in connection with the deception of users and the spread of rumors, defamation, or disinformation, but they can also be employed for nonmalicious purposes, for example, when they automatically aggregate and disseminate content from different sources.24
Depending on how social bots are programmed, they can perform different types of operations to pursue their goals, ranging from liking accounts to identically distributing specific content to learning new behavior patterns. Despite advances in machine learning and artificial intelligence (AI), social bots cannot be considered “full ethical agents”25 at present, as they cannot (yet) be attributed consciousness, free will, and intentionality.26 With the delegation of human agency within a predefined or expanding scope of action, social bots gain a limited autonomy that can affect the construction of reality and the behavior of users of social networks, leaving open questions about the responsibility and liability for actions performed by social bots.27
Due to the false pretense of human identity and the imitation of human behavior, the identification of social bots poses a complex challenge. In addition, the existence of hybrid forms (e.g., trolls) as well as the lack of consensus on the degree of automation from which an account is to be considered a bot further complicate an unambiguous classification.28 To address this, numerous scientists have developed classification schemes and detection programs,29 including the Botometer,30 which is frequently used in academic research. Using a variety of profile, network, and behavioral data and based on machine learning, it computes a probability value for any Twitter profile that indicates whether a particular user is a bot.31
However, doubts have been raised as to the validity of the results of studies that have used Botometer to detect social bots, as these results in many cases contain a not insignificant number of false positives (i.e., profiles identified as bots that are not actually bots) or false negatives (i.e., bots identified as human profiles).32 Botometer is based on a machine learning process that uses a variety of characteristics of profiles from a training data set to identify patterns that distinguish bots from human users. On this basis, Botometer computes a probability that a profile is a fully automated account. This would include, for example, the account of a newsroom that automatically distributes all published articles via Twitter. It is therefore not recommended to choose an arbitrary threshold value, but to determine and justify it depending on the data set used.33
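To illustrate why the choice of threshold matters, consider the following minimal sketch; it is not the actual Botometer interface, and the handles, scores, and ground-truth labels are hypothetical. It classifies profiles by a bot-probability score and shows how moving the threshold trades false positives against false negatives:

```python
# Minimal sketch (not the actual Botometer interface): profiles are classified
# by a bot-probability score, and shifting the threshold trades false positives
# against false negatives. Handles, scores, and labels are hypothetical.

from dataclasses import dataclass

@dataclass
class Profile:
    handle: str
    bot_score: float  # probability-like score in [0, 1] from some detector
    is_bot: bool      # ground-truth label, known here only for illustration

profiles = [
    Profile("@newsroom_feed", 0.75, False),      # fully automated but benign news account
    Profile("@campaign_amplifier", 0.88, True),
    Profile("@ordinary_user", 0.12, False),
    Profile("@cyborg_account", 0.55, True),      # partly automated hybrid profile
]

def evaluate(threshold: float) -> tuple:
    """Count (false positives, false negatives) for a given threshold."""
    fp = sum(1 for p in profiles if p.bot_score >= threshold and not p.is_bot)
    fn = sum(1 for p in profiles if p.bot_score < threshold and p.is_bot)
    return fp, fn

for threshold in (0.4, 0.6, 0.8):
    fp, fn = evaluate(threshold)
    print(f"threshold {threshold:.1f}: {fp} false positive(s), {fn} false negative(s)")
```

In this toy example, a low threshold flags the fully automated but benign newsroom account as a bot, while a high threshold misses the partly automated cyborg account; which error is more tolerable depends on the data set and the purpose of the analysis, which is why an arbitrary threshold is discouraged.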
Mechanisms of Social Bots
Social bots unfold their effect on the basis of their own features and capabilities, in interaction with the network architecture and the algorithmic selection logic of platforms, and depending on the individual receptivity of social network users. These factors are briefly explained in the following.
Automation
Automation is one of the constituent features of social bots and allows their activities to be restricted only by physical limits to the transmission and computational processing of signals and by the design and architecture of an online environment. For instance, a less sophisticated social bot automatically retweets any post that contains a specific hashtag. In doing so, it potentially expands the distribution of this particular contribution to its own network of followers. More complex social bots, on the other hand, can draw on a versatile repertoire of options for action and adaptation in order to create publics in social networks and influence the flow of information or the perception of people or topics. They can create and post original content, imitate temporal usage patterns, copy profile information and usernames from other users with slight changes, and seek interaction with other users.34 Furthermore, social bots can specifically misuse “benign” bots in order to promote the spread of false messages, whereby the latter unintentionally enter into the service of the initiators of the “malignant” social bots. As a result, an unambiguous and temporally persistent categorization into “good” or “bad” bots is difficult, if not impossible; moreover, the normative basis of such a categorization is in itself highly complex, especially since the detection and proof of deception, manipulation, or disinformation are never value-free. Further developments in the field of AI suggest that predicting the actions of social bots35 as well as recognizing them, whether by humans or by algorithmic systems, will become even more difficult.36
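To make the automation described above concrete, the following sketch shows the logic of such a "less sophisticated" bot that retweets every post containing a given hashtag. The PlatformClient interface is a hypothetical stand-in for whatever API a social network exposes, not a real library:

```python
# Sketch of a simple hashtag-retweeting bot. `PlatformClient` is a hypothetical
# stand-in for a social network API; no real library is assumed.

import time
from typing import Iterable, Protocol

class PlatformClient(Protocol):
    def search_recent(self, query: str) -> Iterable[dict]: ...
    def retweet(self, post_id: str) -> None: ...

def run_hashtag_bot(client: PlatformClient, hashtag: str, interval_s: int = 60) -> None:
    seen = set()
    while True:
        for post in client.search_recent(hashtag):  # fetch recent posts with the hashtag
            if post["id"] not in seen:
                client.retweet(post["id"])          # amplify the post to the bot's own followers
                seen.add(post["id"])
        time.sleep(interval_s)                      # bounded only by polling interval and rate limits
```

Everything beyond this minimal loop, such as posting original content, imitating usage patterns, or targeting specific users, adds complexity but follows the same principle of delegated, automated agency.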
Distortion of Popularity Indicators
By feigning human identity and due to the potential of unlimited communication and networking activities through automation, social bots can distort popularity indicators such as the number of followers, likes, or retweets of people, topics, or positions.37 On Twitter, for example, they can make use of its asymmetrical network structure, which allows one profile to follow another without its explicit consent. At the individual user level, popularity indicators can serve as a cue for the relevance or credibility of a profile or (the source of) content and thus affect its processing.38 According to the Elaboration Likelihood Model,39 source characteristics can influence the extent of cognitive engagement with and the persuasiveness of content.40 Further, popularity indicators can stimulate the selective exposure to and the use of content41 as well as activities such as liking or sharing,42 which can ultimately lead to a self-reinforcing effect.43 Because popular profiles, content, and positions can appear to reflect the majority's position, they can also promote the effect of the “spiral of silence,”44 thus discouraging or preventing supporters of the supposed minority opinion from publicly expressing their opinion for fear of social isolation.45 This tendency has been observed among users of social media, affecting their willingness to express their opinion in both online and offline settings.46 Simulations have shown that in a highly polarized setting even a small number of social bots can succeed in tipping the opinion climate and, as postulated by the theory of the spiral of silence, in suggesting supposed majorities that do not exist in reality.47 At the technological level, inflated metrics also affect the algorithmic selection logic of social networks: Content or profiles with many search queries or high or rapidly growing popularity values are more likely to be recommended and displayed to users, which in turn can affect their reception and behavior.48
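The tipping dynamic described above can be made tangible with a deliberately simplified toy model; it is an illustration of the mechanism, not a reproduction of the cited simulation studies. Agents voice their opinion with a probability proportional to its perceived visible support, while a small contingent of bots always voices one position:

```python
# Toy spiral-of-silence sketch (illustrative only, not the cited models):
# humans speak up with a probability equal to the perceived visible support of
# their own opinion; bots always voice opinion "B" and thus shift that perception.

import random

random.seed(42)

def visible_share_of_a(n_humans: int = 10_000, bot_share: float = 0.05, rounds: int = 20) -> float:
    opinions = ["A"] * (n_humans // 2) + ["B"] * (n_humans // 2)  # true opinions: exactly 50/50
    n_bots = int(n_humans * bot_share)
    perceived_a = 0.5                                             # initially A and B look equally strong
    for _ in range(rounds):
        expressed = []
        for op in opinions:
            own_support = perceived_a if op == "A" else 1 - perceived_a
            if random.random() < own_support:                     # speak more readily when own camp looks strong
                expressed.append(op)
        expressed += ["B"] * n_bots                               # bots always speak, always for B
        perceived_a = expressed.count("A") / len(expressed)
    return perceived_a

print(f"visible share of opinion A without bots: {visible_share_of_a(bot_share=0.00):.2f}")
print(f"visible share of opinion A with 5% bots: {visible_share_of_a(bot_share=0.05):.2f}")
```

In this toy setting, the visible share of opinion A stays close to its true 50% without bots, but shrinks round by round once a 5% bot contingent persistently amplifies opinion B, even though the underlying distribution of opinions never changes.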
Reach Extension
Social bots can not only distort popularity indicators and thus affect how people, positions, and content are perceived; their activities can also increase the reach of content within and possibly also outside social networks, for example, through users' interpersonal offline communication.49 In this context, different practices and strategies have been identified that make use of diffusion factors such as time and the centrality of nodes in a network. Shao et al., for example, were able to demonstrate through an analysis of Twitter data that the diffusion of false messages from low-credibility sources was facilitated by the retweet activity of several social bots within seconds after publication (and thus before any review by fact-checking organizations, media organizations, or other users).50 Considering that the perceived correctness of implausible news, or even of news explicitly marked as false, grows with increasing exposure frequency,51 this strategy appears all the more successful, especially since there is empirical evidence that human users are as likely to share content provided by social bots as content provided by human users52 and that complex contagion processes (i.e., the propagation of information through activities such as retweets, shares, or likes by several users) promote the diffusion of messages.53 In addition, this distribution mechanism is supported by social bots targeting influential users (with many followers), for example, by using the response and mention functions, which at the same time provide a link to a false message.54
Previous research suggests that the production of disinformation and the targeted use of social bots to promote its dissemination, especially during the last days and hours before an election or vote, are arranged and coordinated by the same initiators.55 By deceiving and targeting human users and supported by recommendation algorithms, they can achieve a faster and more effective diffusion of content. The reasons why people interact with and share content from social bots are not yet well researched, that is, whether they do so because they do not recognize social bots or because they regard their nonhuman nature as irrelevant. Studies in the tradition of the “Computers Are Social Actors” approach,56 which examine whether people unconsciously apply the same social rules and heuristics in dealing with computers, or in this case social bots, as in dealing with people, can provide an indication: Edwards et al. showed in an experiment that bots are perceived as credible, attractive, and competent—qualities normally associated with people.57 The participants also expressed an equally strong intention to interact with a bot or a human user on Twitter.58 In addition, it has been demonstrated that users show a similar level of effort and motivation in information search and cognitive processing when they receive information from a bot as from a human user.59 In short, people use the same processing strategies when interacting with social bots as when interacting with people; they retweet or like their posts and attribute credibility to them.
The Governance of Social Bots
As has been argued, social bots have the potential to influence the democratic opinion-forming and decision-making processes to an undesirable extent. The central challenges identified are: (1) the dissemination of false and/or unlawful content, (2) under the false pretense of human identity, and (3) by means of automated and thus principally unrestricted activity.
This potential cannot be attributed solely to the initiators and programmers of social bots. It can also be viewed as the consequence of organizational or individual decisions and actions of various other actors—including legislators, operators of social networks (including management, programmers, etc.), and their users—as well as of gaps in the distribution of responsibility among these actors.60 This phenomenon, referred to as the “problem of many hands,”61 can be illustrated by the example of social bots increasing the reach of disinformation: If a false message spread by a social bot is presented to a human or computer-generated user, this is the result of a missing or unsuccessful examination of the content for correctness as well as of an algorithmic recommendation by the platform, which itself is based on characteristics of the message as well as on the user's own and other users' behavior. If a user likes, comments on, or shares this message—regardless of any knowledge about the message being false—he or she will, first, increase its reach and popularity indicators, second, influence the perception of the message by other users, and, third, change the basis for algorithm-based operations such as recommendations for other users. The platforms primarily design and use recommendation algorithms to pursue economically motivated goals such as generating user data and increasing the number of interactions62 rather than to promote the dissemination of verified content. This in turn can be understood as the consequence of a US-influenced value and legal system that, with 47 U.S.C. § 230, exempts platforms from liability claims and attributions of responsibility for the distribution of third-party content, and toward which platforms based in the United States mainly oriented their business models and practices until legal disputes with state authorities of other Western countries arose.63
Since the responsibility for collective action cannot be assigned to a single actor64 and the areas in which social bots and their initiators operate are not confined by territorial or national borders,65 and given the networked nature of the measures adopted by the different actors, solutions for these problems can only be developed by taking into account different levels of regulation and the interactions among different actors such as legislators, platform operators, and users.66 In this context, particular attention needs to be paid to the asymmetrical resources, coordination possibilities, and enforcement capacities the different actors have. The options include forms of self-help by users, internal or collective self-organization by companies and sectors, co-regulation, and state interventions.67
Governance options for the three problem areas outlined are presented and discussed below from a STATE, ORGANIZATIONAL, and USER perspective. Potential measures can be classified according to the time of intervention as preventive or repressive, and as already implemented or merely conceivable. From a public interest perspective, governance should serve to strengthen benefits and minimize risks.68 In the context of the activities and influence of social bots, this specifically means that governance options should be used to guarantee and promote fundamental rights such as freedom of expression and unbiased decision-making, and to curb the potential for censorship and repression as well as for the manipulation of elections and socially relevant discourses.
For the discussion of state options in dealing with social bots, reference is made to Swiss constitutional law, in particular to the provisions on freedom of expression and formation of opinion (Articles 16 and 34 Cst.), which are consistent with comparable provisions in other western democracies (e.g., the First Amendment to the Constitution of the United States of America, Article 5 of the German Basic Law, and Article 10 of the European Convention on Human Rights), so that the discussion can claim a degree of generalizability beyond the Swiss case.
Governance Options Addressing Illegal Content
In Switzerland, illegal content includes, among other things, statements that make false harmful factual claims, contain discriminatory, violence-glorifying, or pornographic content, or infringe the personal rights of third parties. STATE: Governance options for this problem area must consider the right to freedom of expression established in Article 16 Cst. paragraph 2 and the conditions for the restriction of fundamental rights formulated in Article 36 Cst.: The fundamental right to freedom of expression, which is concretized in Article 16 Cst. paragraph 2, serves to protect the needs and interests of people that are of fundamental importance for their personality.69 The article serves two primary functions: First, it should ensure the free exchange of views and perspectives, which is relevant for democratic processes. Second, it enables citizens to express criticism of (perceived) political and social grievances.70 In particular, it also establishes a right of defense against interventions by the state or third parties, especially in connection with censorship or sanctions. A restriction of the fundamental right to freedom of opinion and expression, according to Article 36 Cst., requires a legal basis, is subject to the principle of proportionality, and is only possible if it is in the public interest or justified by the protection of fundamental rights of third parties.71 The core essence of the fundamental right must not be subject to any restriction.72 This includes, for example, a prohibition of pre-censorship in the sense of a systematic control of intended expressions of opinion.73 A restriction of the freedom of expression can be justified if the civil and criminal legal protection of the personality of private third parties is affected and in cases of discrimination, defamation, or the deliberate dissemination of false facts.74 If content posted by a social bot infringes the fundamental rights of third parties or applicable law, the freedom of expression can be restricted; the same repressive state defense mechanisms, for example, regarding liability and injunction, apply as for other communication channels. However, it is not always possible to distinguish between content that is actually illegal and content that is merely undesirable. Moreover, regulations differ internationally: Denying the Holocaust, for example, is not illegal in all countries, and in the United States false allegations are generally protected by the First Amendment.75
Since social bots do not (yet) have their own legal personality, the programmer or the initiator is regarded as the owner of the legal interest against which an intervention can be asserted and as the addressee of sanction claims,76 although the possibilities for sanctioning also depend in part on international treaties and cooperation with foreign authorities in the prosecution and combating of criminal activities. In the course of technological development in the fields of robotics and AI, increasingly autonomous behavior of social bots is to be expected. It is therefore questionable whether illegal content distributed by autonomous social bots can still be legally charged to the programmer in the future. In this context, e-personhood is being discussed as a new legal entity,77 but there are no concrete implementations so far.
ORGANIZATIONS: The responsibility of platforms for the distribution of illegal or deliberately false content must also be considered: Due to their network architecture as well as the algorithmic selection, recommendation, and curation of content, platforms not only act as transmitters but also allow social bots to exploit these algorithmic mechanisms in order to pursue their goals. Accordingly, from a governance perspective, the coresponsibility of platforms for the distribution of content by social bots must also be taken into account. At the level of self-regulation, the operators of platforms such as Twitter or Facebook state in their usage guidelines78 that the publication and distribution of such content via their networks is prohibited. In order to prevent this, certain platforms—for example, Facebook in connection with terrorist, violence-glorifying, and pornographic content79—check during the upload process, by means of algorithmic filters and human editors, whether content is illegal or violates their guidelines. Preventive content moderation is, however, not unproblematic from a constitutional point of view, especially since the algorithms used for such tasks are based on human programming and machine learning and therefore operate neither in a value-neutral nor in an objective manner.80 This carries a potential risk of systematic bias due to incorrect results,81 particularly if the law also provides for sanctions against platform operators for the distribution of such content.
Repressively, platforms can remove illegal content and links to such content (or even entire profiles), either by order of a court or through a reporting procedure in which users partly assume the work of content moderation. In the latter case, too, the platform operators must ultimately decide whether or not a piece of content infringes the guidelines. Depending on how the reporting procedure is designed, it also opens up room for manipulation by social bots themselves: If, for example, a user account can be blocked by reporting an (alleged) violation, bots can be programmed to identify, report, and thus block (influential) accounts that support opposing positions.82 Whereas the detection of illegal content already poses a substantial challenge to platform operators, the challenge is even greater in connection with the dissemination of intentionally false facts. While in practice it is often nearly impossible to draw a clear line between false and true content, it is even more difficult to make a judgment about the intention of the author of a piece of content, all the more so if it originates from a social bot, to which no awareness or capacity for intentionality can currently (yet) be attributed.83 Preventive as well as repressive content moderation on the part of the platform operators thus harbors a potential risk for the freedom of expression and the free formation of opinion in the form of misjudgments and systematic precensorship. In addition, it also poses a risk to the commercial success of platforms by requiring considerable resources and potentially frustrating users whose content is deleted or whose accounts are (temporarily) blocked.84
In response to the dissemination of deliberately false content, measures can be implemented that help to promote the quality of information in a social network in general85 and thus indirectly address the harmful potential. These include, for example, visual cues that help users orient themselves with regard to the quality and credibility of content, sources, and authors, or an adaptation of recommendation algorithms in favor of high-quality content.86 Again, it should be noted that such adaptations can be a source of bias. Research on the use of warnings in connection with mis- and disinformation has shown that they can reduce the perceived credibility of such content and the probability of sharing it, but unintended effects have also been observed.87 Particularly problematic is the fact that the existence of warning labels can have a detrimental effect on the credibility attributed to other (true) content and can call into question the perceived correctness of existing beliefs.88 Furthermore, false content that is not accompanied by a warning is implicitly validated.89 Given these difficulties, it is important from a democratic perspective that governance be based on a common understanding among key stakeholders of what is to be regarded as a real threat to third-party rights and public security.90 To meet this objective, it would be conceivable at the co- and self-regulatory level to participate in, support, and finance expert panels as well as bot detection and fact-checking agencies jointly operated by citizens, academics, journalists, platform operators, and state actors.91
As far as USERS are concerned, their behavior also contributes to how content spreads in social networks, for example, by liking, sharing, or reporting content as offensive if a social network has a corresponding feedback function, or by refraining from doing so.92 It is however important to highlight that the options available to users are largely determined by the platforms. The reports of particularly credible users, so-called “Trusted Flaggers,”93 are given privileged treatment by some platforms and the content that has been flagged is removed more quickly. For the average user, however, it is an extremely demanding task to judge whether content is illegal, deliberately misleading, or comes from a reputable source. Such tasks require a certain level of competence in the critical use of sources and content as well as in the functioning of social networks, especially given that activities such as liking and sharing can be classified as independent expressions of opinion and are also subject to possible claims for sanctions.94
In contrast to state actors or platform operators, users do not have an organization and resources that would allow them to act in a goal-oriented and strategic way. Correspondingly, platforms have a major responsibility for and play a particularly important role in how users deal with illegal or deliberately false information disseminated by social bots: in the design of the terms of use, in the transparency of their operations, by equipping users with true and effective options to organize and enforce their interests vis-à-vis the platforms and thus be able to exercise a supervisory and custodian role,95 and in providing the necessary skills in cooperation with and complementary to state institutions such as educational institutions. In this context, voluntary or mandatory tests (e.g., for users exceeding a certain number of followers or influence level), similar to a driving exam, would help to raise the users' awareness of illegal content and untrustworthy sources, inform them about reporting possibilities and thus enable them to use a social network responsibly.96 Finally, users are free to turn away from certain social networks if the quality of the content or the benefits no longer meet their expectations. However, it is often network effects or a lack of equivalent alternatives that make a change difficult.97
Table 1 provides an overview of the governance options to protect against the distribution of illegal content by social bots.
Governance Options Addressing the False Pretense of Human Identity
The freedom of expression guaranteed by Article 16 Cst. applies equally to natural and legal persons and also covers statements made anonymously or under a pseudonym. Under certain conditions, this can serve to protect against reprisals, hostility, or personal disadvantage and can be conducive to the expression of opinions that would otherwise not be voiced.98 Accordingly, this also means that a requirement to communicate under one's real name is not intended by the constitution. The scope of protection applies regardless of the choice of communication channel.
STATE: It can therefore be argued that this basic right cannot be denied even to a social bot or to the programmer or initiator of a social bot who uses it, for example, to protect his or her identity.99 This argument could be countered by saying that a social bot is not merely a communication channel that enables anonymous communication, especially when it is deliberately used to manipulate opinions or popularity indicators under the false pretense of human identity.100 Under these circumstances, freedom of expression as a premise and free opinion-formation as a conclusion come into a seemingly irreconcilable conflict. However, no right to restrict freedom of expression can be derived from the mere pretense of human identity. The right of a programmer or user to express himself or herself freely under anonymous conditions must be given greater weight than the users' expectation to interact with humans in social networks.101 This becomes even more evident when one considers the example of a programmer in an authoritarian regime who, for fear of reprisals, uses social bots to spread critical positions under the protection of anonymity. On the one hand, this protection of anonymity can contribute to a climate of opinion that is conducive to the free formation of opinion and is characterized by diversity of content and pluralistic interests. On the other hand, it also conceals the danger that users may be deliberately and unnoticeably deceived, for example, by social bots copying profile information and usernames with slight changes or by imitating usage patterns.102 A potential way of combining the disclosure of nonhuman profiles with anonymous communication by social bots is offered by bot disclosure legislation, such as that envisaged in the draft US Bot Disclosure and Accountability Act of 2019 or in the Medienstaatsvertrag der Länder (Interstate Media Treaty) in Germany. Such laws impose an obligation to label social bots but do not prohibit them, thereby preserving the possibility of using social bots to make anonymous statements. A repressive approach at the state level, on the other hand, is undesirable, as the potential danger to freedom of expression would outweigh the benefits.
ORGANIZATIONS: Implementing a labeling obligation would require the valid and unambiguous identification of social bots, which, as mentioned earlier, is very difficult, and thus close cooperation with the operators of social networks or other forums in which social bots can be active. Although the terms of use of Twitter or Facebook, for example, prohibit operating under a false identity, these two platforms do not check the authenticity of an identity when a profile is created.103 Mandatory verification, for example, by means of an official identity document, would, however, not be unproblematic for reasons of data protection and privacy. The verification process would have to be designed to protect the anonymity of users (even in authoritarian regimes), especially with regard to possible repression, but also to ensure that no discrimination in access occurs, for example, due to missing documents. A possible solution might be a blockchain-based self-sovereign identity approach,104 which leaves the sovereignty over one's own data with the user and allows him or her to decide which service provider gets access to which data.105 For example, a social media platform would simply receive confirmation from a certified digital identity (e-ID) that an applicant is a natural or legal person and whether a possible quota of permitted profiles has already been exhausted. In addition, further elements such as name or picture could be verified at the request of a user, which could be of particular importance for persons in public life. Beyond simply indicating that a profile is verified, this information could further be used to display the ratio between verified and nonverified followers or friends in the case of profiles or, in the case of content, the ratio between verified and nonverified profiles that retweet or like it. This would at the same time increase transparency with regard to popularity indicators. However, it can be assumed that the acceptance and effect of such indications would depend on the perception of their validity, which in turn would have consequences, for example, with regard to the selection and credibility attribution of sources and content, similar to those discussed for warnings in connection with false content.
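The transparency indicator mentioned above, the ratio of verified to nonverified profiles among followers or among the accounts interacting with a piece of content, could be computed in a straightforward way. The following sketch assumes a hypothetical verification lookup (for example, backed by an e-ID); it is an illustration, not any platform's actual implementation:

```python
# Hypothetical sketch of the verified/nonverified transparency indicator.
# `is_verified` stands in for whatever lookup a platform or identity provider
# would offer; the registry and account names below are invented.

from typing import Callable, Iterable

def verified_share(account_ids: Iterable[str],
                   is_verified: Callable[[str], bool]) -> float:
    """Fraction of the given accounts whose identity has been verified."""
    ids = list(account_ids)
    if not ids:
        return 0.0
    return sum(1 for acc in ids if is_verified(acc)) / len(ids)

# Toy verification registry and follower list:
verified_registry = {"alice", "newsdesk", "bob"}
followers = ["alice", "bob", "suspicious_4711", "suspicious_4712", "carol"]

ratio = verified_share(followers, lambda acc: acc in verified_registry)
print(f"{ratio:.0%} of followers are verified profiles")  # 40% in this toy example
```

Displayed next to follower counts or retweet numbers, such a ratio would qualify raw popularity indicators without revealing who the nonverified accounts are.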
Authentication of human users can also be performed technologically using challenge–response techniques such as captcha tests, although progress in machine learning and AI has resulted in such tests, as currently designed, being increasingly mastered by software.106 Such authentication could be limited, for example, to profiles that prefer to remain anonymous or unverified or whose estimated automation probability exceeds a certain threshold. This would prevent users from being confronted with such tests permanently. In addition, verified profiles could be assigned more relevance in algorithmic recommendations, rankings, or the identification of emerging issues.107 However, it should also be noted in this context that the visibility of opinions and interests that are expressed anonymously for fear of reprisals could suffer as a result, and that algorithms, again, have the potential for bias due to nonneutral programming and selection decisions.
On the USERS side, the possibilities for dealing with social bots that feign a false identity are limited primarily to detecting them and preventing deception. User behavior contributes to the connections and reach that social bots can have within a network. Depending on the platform characteristics, appropriate profile settings and critical behavior can facilitate, for example, checking requests from unknown or nonverified accounts and, if necessary, cutting connections. Again, this requires a certain competence and literacy in the use of social networks. In this context, operators of social networks play a crucial role in designing the conditions (functions for and assistance in identification, operation, configuration, interaction, etc.) that enable individual users to meet their responsibilities.108 This also applies to teaching the necessary skills in cooperation with and complementary to state institutions such as educational institutions.
Table 2 provides an overview of governance options to protect users from social bots feigning a false human identity.
Governance Options Addressing Automated and Potentially Unlimited Activities
Article 16 Cst. on freedom of expression in general and Article 34 Cst. on free political decision-making in particular do not only guarantee freedom of expression; they also oblige the Swiss STATE to ensure a functioning communication system that enables free and undistorted political opinion-formation and participation.109 If the integrity of an election or vote is at risk, the state is required to intervene. However, such intervention must not lead to a situation in which freedom of expression is no longer possible under anonymous conditions.110 Two potential threats to freedom of expression and opinion-formation result from the automated and thus principally unlimited activities of social bots and their (hitherto) nonexistent awareness, which impedes argumentation and understanding based on reflection: On the one hand, the adequate representation of pluralistic interests can be endangered; on the other hand, an accurate perception of the relevance and popularity of political personalities or positions can be distorted, which may even promote a spiral of silence.111 A preventive intervention at the state level could consist of the legislator imposing restrictions that apply to specific periods of time, actors, and channels. For example, a general prohibition of the use of social bots by parties and other political and social groups on information and opinion platforms could be considered, as envisaged in the US Bot Disclosure and Accountability Act of 2019. This could, for example, be extended to all users during the final weeks before an election or vote; it would directly address the increased use of social bots shortly before votes or elections identified by studies112 and counteract the danger of artificially generated changes of opinion in highly polarized settings.113 Such legislation would again require the cooperation of platform operators and coordination with all involved actors.
In extreme cases, repressive options at the state level can go as far as declaring the results of an election or vote invalid.114 However, it should be remembered that while the detection of social bots already presents a complex challenge, quantifying the actual manipulative influence of social bots on the communication order, the formation of opinions, and the outcome of an election or vote is even more difficult. There is also a risk that such an instrument could be misused in the case of undesired election or voting results, especially in authoritarian regimes. Correspondingly, repressive measures aimed at restoring the communication order and thus enabling the free formation of opinion are normally preferable. However, this would first require comprehensive cooperative efforts on the part of the platform operators for the purpose of continuously monitoring discourses in terms of content and quantitative indicators. Given the asymmetrical access to data, this task would have to be demanded of the platform operators by state actors and could be supervised, for example, by the independent expert panels and bot detection and fact-checking agencies discussed earlier. From a democratic perspective, it is important in this context as well that the implementation of governance measures reflects cultural and legal traditions and is based on a consensus among the main political and social actors on what is regarded as a threat to the communication order and public security, which circumstances justify an intervention (e.g., exceeding certain thresholds for popularity indicators with the help of the activity of social bots), and in which form such an intervention could take place (e.g., flagging or blocking of profiles and/or content).
ORGANIZATIONS: At the self-regulatory level, platform operators also have preventive technological options at their disposal with which they could, for example, limit the activities of a profile in order to make it more difficult to use social bots in a scalable and manipulative way. These could be based, among other things, on measuring the frequency of posted messages, likes, friend requests, and so on, per defined time unit and profile, or on analyzing the diffusion of content, for example, with regard to reaction time or retweet ratio.115 Content or profiles identified in this way could then be downgraded or ignored in algorithmic recommendations and when measuring trends.116 It should be noted, however, that accounts incorrectly identified as bots would also be affected by such measures.
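A per-profile activity limit of the kind described above could, for instance, count actions within a sliding time window and flag profiles that exceed a threshold. The following sketch is an assumed design for illustration, not any platform's actual system:

```python
# Sketch of a per-profile activity limit (an assumed design, not any platform's
# actual system): actions are counted in a sliding time window, and profiles
# exceeding the limit are flagged for downgrading or review.

import time
from collections import defaultdict, deque
from typing import Optional

class ActivityLimiter:
    def __init__(self, max_actions: int, window_s: float):
        self.max_actions = max_actions
        self.window_s = window_s
        self._events = defaultdict(deque)  # profile_id -> timestamps of recent actions

    def record(self, profile_id: str, now: Optional[float] = None) -> bool:
        """Record one action (post, like, friend request, ...). Returns True while
        the profile stays within its limit, False once it should be flagged."""
        now = time.time() if now is None else now
        events = self._events[profile_id]
        events.append(now)
        while events and now - events[0] > self.window_s:  # drop actions outside the window
            events.popleft()
        return len(events) <= self.max_actions

# Usage example: at most 30 actions per profile and minute.
limiter = ActivityLimiter(max_actions=30, window_s=60)
within_limit = True
for i in range(40):  # a burst of 40 actions within four seconds
    within_limit = limiter.record("suspect_profile", now=i * 0.1)
print("flag profile for review:", not within_limit)  # True: the burst exceeds the limit
```

Analogous counters could be kept for retweet ratios or reaction times; the caveat from the text applies here as well, since legitimate heavy users or benign automated accounts would trip the same limits.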
The increasingly sophisticated and complex automated activities of social bots make it more difficult for USERS to detect them and thus increase the likelihood that users will be deceived and ultimately, unconsciously and unintentionally, become assistants of the initiators or programmers of social bots in pursuing their goals. In this context, promoting knowledge and skills in the critical use of (social) media, their content, their (algorithm-based) functioning, and their potential to influence opinions and political preferences would be helpful. Furthermore, information provided by platforms that helps users recognize the profiles of social bots or that indicates the share of social bots within the popularity indicators of profiles or content would be useful.117 This would enable users to understand how their actions contribute to the way information is spread on social networks and how social bots can systematically exploit this. It would also empower them to better identify and, where appropriate, report suspicious activity, content, or profiles. Again, the difficulty of validly identifying such profiles should be pointed out, as it could affect the acceptance and impact of such indications.
Table 3 provides a summary of governance options that address the potentially unlimited activities of social bots.
Conclusion
The potentially harmful role of social bots in the opinion-formation and decision-making processes before votes, elections, or within controversial debates is increasingly being discussed in public, political, and scientific forums. Critical issues are (1) the dissemination of illegal content and disinformation, (2) the false pretense of human identity, and (3) the potentially unlimited number of communication and networking activities that can deceive about relevance and actual majorities.
The aim of this article was to present and discuss (existing and possible) governance options for addressing these issues. In order to take into account the cross-border and platform-based nature of the activities of social bots, (collaborative) options for action by state, organizational, and individual actors were considered.118 The fact that social networks, which present themselves as facilitators of global networking and open exchange of ideas, have proven to be particularly susceptible to manipulation through the use of social bots and thus tend to contribute to undermining a normatively desirable exchange119 can be understood both as a consequence of their emergence and their (technological, economic, and networking) functional logic and as a consequence of gaps in the distribution of responsibility for actions in social networks and their consequences.120
In the following, the governance options will be presented in the form of theses, grouped by problem areas:
Illegal content distributed by social bots, which, for example, contains defamation or discrimination, can be subject to the same legal sanctions as content distributed through analogue channels. Respecting the core content of the constitutional provision on freedom of expression, the state can only act repressively, never preventively, as preventive action would constitute censorship.121 Platforms, on the other hand, can take both preventive and repressive action against illegal content within the legal framework. Through the use of algorithmic filters and manual checks by platform employees, they can prevent its publication, downgrade it in recommendations and trend measurements, label it with a quality indication, or remove it completely.122 However, it should be noted that these forms of content selection, moderation, and curation leave room for systematic bias.123 In addition, the assessment of the illegality or the factuality and quality of content is often not possible beyond doubt and can also vary across jurisdictions. For this reason, the participation of various stakeholders in the development of selection and weighting rules would be desirable. Platforms also react to notifications from users, who report illegal content via the feedback options implemented by the platforms and who, through their actions, contribute to whether and how such content spreads.124 Consequently, it is also the duty of the platforms, together with state institutions, to provide users with the skills for a responsible use of social networks with regard to content and sources and with knowledge of the functioning of algorithms in order to reduce the potential for deception.
Since social bots can also serve as an instrument for anonymously sharing opinions and perspectives, a legal prohibition of the anonymous use of social bots should be rejected. However, a governmental or organizational obligation to label bots (as part of bot disclosure legislation) would be possible, provided that an unambiguous identification—so far not feasible—can be achieved. Platforms further have the possibility of linking the creation of a user account to the use and verification of a legal or digital identity and of securing activities by means of authentication tests. Beyond mere labeling, such information can also be used to give greater weight to verified profiles in recommendations and to provide users with clues regarding the participation of automated agents in online discourse. It cannot be ruled out, however, that such indications may also have unintended consequences for reception, for example, in the selection and credibility attribution of sources and content. Depending on the settings options of a platform, users can minimize the possibility of coming into contact with unknown other users (including social bots).
The provisions of the Swiss Federal Constitution on the protection of freedom of expression also establish a right to protection against a biased online discourse before elections, votes, and in controversial debates due to the potentially unlimited activity of social bots. Therefore, drastic instruments such as a prohibition of bot activities during certain periods, through certain channels or by particularly relevant and resource-rich actors would be possible. Less invasive would be a limitation of activities per user account and defined time unit.
It is important to note, however, that the governance measures described earlier may not only address the respective problem area but can also affect the other challenges at the same time: For example, limiting communication activities also minimizes the damaging potential of illegal content and vice versa. The success of the governance instruments discussed depends essentially on the extent to which the following challenges can be met:
National legislation: Since the activities of social bots are not confined by national borders, purely national regulation seems somewhat anachronistic. For example, a prohibition of the use of social bots by Swiss actors before elections could be circumvented by social bots programmed by German actors and operated from servers located in Ukraine. An inter- or supranational solution that unites different cultural and legal traditions (as far as possible and desirable) and different stakeholders and gathers the necessary technical know-how would be desirable in this context.
Balancing of legal interests: The governance of social bots is always a balancing of different legal interests: On the one hand, the right of the individual to spread opinions and perspectives anonymously while protecting his or her identity via a social bot, on the other hand, the right of the public not to be exposed to a biased online discourse. In the choice of governance measures, both of these goods must be taken into account. Furthermore, it must also be kept in mind that governance measures introduced in states with functioning democratic systems should not serve to legitimize and expand the possibilities for censorship and repression in autocratic regimes.125
Unambiguous identification: Governance options for social bots can only work if the bots can be clearly identified and false positives and negatives can be excluded. Currently, however, identification is associated with considerable uncertainty. This is mainly because the boundaries between different profile forms are fluid and identification resembles a cat-and-mouse game between the programmers of social bots and the programmers of detection software.
Technological development: The regulation of technologies, such as the activities of social bots but also algorithmic filters and detection programs, can only react ex post to technological possibilities and innovations. Future developments, their potential dangers, and the corresponding need for regulation are usually not foreseeable. Therefore, it is not possible to conclusively assess whether the options outlined earlier would also be viable in the future.
Economic maxim of the platforms: The implementation of the governance options depends largely on platform operators, because legal regulations require technical and organizational implementation by the platform operators and are therefore subject to their willingness to cooperate. Platforms provide the operational framework126 by enabling or preventing the existence of social bots on their networks and, at the same time, by providing settings both for social bots and for protection against them. A variety of the governance options outlined would change the way platforms and users interact, limiting the extent or intensity of communication activities on social networks and thus the basis of the platforms' business model, which rests on user data and interactions.127 Whether platform operators, for economic reasons, are more likely to adopt a reactive stance in the development and implementation of governance options, or whether they are committed to the ideal they communicate in their marketing as promoters of global networking and open exchange of ideas, remains an open question.
Footnotes
1. Yücel.
2. Bessi and Ferrara.
3. Schäfer, Evert, and Heinrich.
4. Neudert, Kollanyi, and Howard; Keller and Klinger.
5. Ferrara.
6. Howard and Kollanyi.
7. Rauchfleisch and Vogler.
8. Stella, Ferrara, and De Domenico.
9. Hegelich and Janetzko.
10. Abokhodair, Yoo, and McDonald.
11. Suárez-Serrato et al.
12. Broniatowski et al.
13. Varol et al.
14. Bessi and Ferrara.
15. Ross et al.
16. Ferrara et al.; Woolley; Steinbach; Libertus.
17. Thompson; Nissenbaum; van de Poel et al.
18. Woolley.
19. Rhodes; Puppis.
20. Franklin and Graesser; Tsvetkova et al.; Howard, Woolley, and Calo.
21. Gorwa and Guilbeault; Oentaryo et al.; Stieglitz et al., “Do Social Bots Dream of Electric Sheep?”
22. Gorwa and Guilbeault.
23. Ferrara et al.; Stieglitz et al., “Do Social Bots Dream of Electric Sheep?”
24. Ferrara et al.
25. Moor.
26. Moor; Mitcham.
27. Just and Latzer; Guilbeault; Klinger and Svensson.
28. Hegelich and Thieltges.
29. Subrahmanian et al.; Chu et al.; Ferrara et al.; Oentaryo et al.; Stieglitz et al., “Do Social Bots (Still).”
30. Davis et al.; Varol et al.
31. Davis et al.; Ferrara et al.
32. Rauchfleisch and Kaiser.
33. See Botometer FAQ, available at: https://botometer.iuni.iu.edu/#!/faq (accessed June 21, 2020).
34. Aiello et al.; Ferrara et al.; Yang et al.
35. Just and Latzer.
36. Cresci et al.
37. Porten-Cheé et al.; Ross et al.
38. Porten-Cheé et al.
39. Petty and Cacioppo, “Source Factors and the Elaboration”; Petty and Cacioppo, “The Elaboration Likelihood Model.”
40. DeBono and Harnish.
41. Messing and Westwood; Knobloch-Westerwick et al.; Yang.
42. Shao et al.; Haim; Bobkowski.
43. Porten-Cheé et al.
44. Noelle-Neumann.
45. Porten-Cheé et al.; Keller and Klinger.
46. Hampton et al.
47. Ross et al.; Cheng, Luo, and Yu.
48. Papakyriakopoulos, Serrano, and Hegelich; Just and Latzer; Lazer et al.; van Dijck, Poell, and de Waal; Yang et al.
49. Shao et al.; Vosoughi, Roy, and Aral.
50. Shao et al.
51. Pennycook, Cannon, and Rand.
52. Vosoughi, Roy, and Aral; Varol et al.
53. Mønsted et al.
54. Shao et al.
55. Shao et al.; Ferrara; Stella, Ferrara, and De Domenico; Bastos and Mercea.
56. Reeves and Nass.
57. Edwards et al., “Is That a Bot Running the Social Media Feed?”
58. Ibid.
59. Edwards et al., “Differences in Perceptions of Communication Quality.”
60. Thompson; Nissenbaum; van de Poel et al.
61. Thompson.
62. van Dijck, Poell, and de Waal.
63. Jin; van Dijck, Poell, and de Waal; Gillespie, “Platforms are not Intermediaries”; Pasquale, “Platform Neutrality.”
64. Nissenbaum; van de Poel et al.
65. Woolley.
66. Napoli; Helberger, Pierson, and Poell; van Dijck, Poell, and de Waal.
67. Saurwein, Just, and Latzer; Puppis.
68. Black; Saurwein, Just, and Latzer.
69. Rhinow and Schefer.
70. Kley and Tophinke.
71. Rhinow and Schefer.
72. Kley and Tophinke.
73. Ibid.
74. Biaggini; Schweizerisches Bundesgericht, “Bundesgerichtsurteil 6B_119/2017”; Sprecher et al.; Schweizerisches Bundesgericht, “Bundesgerichtsurteil 6B_267/2018”; Rhinow and Schefer.
75. Wood.
76. Oehmer.
77. Fanti.
78. See https://help.twitter.com/en/rules-and-policies/twitter-rules; https://www.facebook.com/legal/terms (accessed June 21, 2020).
79. See https://newsroom.fb.com/news/2019/05/enforcing-our-community-standards-3/ (accessed June 21, 2020).
80. Gillespie, “The Relevance of Algorithms”; Gillespie, “Governance of and by Platforms.”
81. The same risk applies to human content moderators. In addition, they are confronted with highly problematic working conditions. See Gillespie, “Platforms are not Intermediaries.”
82. Gollmer.
83. Mitcham.
84. West.
85. Napoli.
86. Clayton et al.; Pasquale, “The Automated Public Sphere.”
87. Chan et al.; Mena; Walter and Murphy; Walter and Tukachinsky.
88. Carey et al.; Clayton et al.
89. Pennycook et al.
90. Helberger, Pierson, and Poell.
91. Wardle and Derakhshan.
92. Helberger, Pierson, and Poell.
93. In Switzerland, the Federal Office of Police fedpol holds this status. See: Bundesrat.
94. Koltay.
95. Gillespie, Custodians of the Internet.
96. Examples of such voluntary offers are “Troll Factory” provided by Finnish public service broadcasting company YLE that teaches how information operations work on social media (see https://trollfactory.yle.fi/, accessed June 21, 2020) and “Interland,” an initiative of Google, which wants to make children familiar with potential dangers on the Internet (see https://beinternetawesome.withgoogle.com, accessed June 21, 2020).
97. Barwise and Watkins; Saurwein, Just, and Latzer.
98. Milker.
99. Steinbach.
100. Milker.
101. Oehmer.
102. Yang et al.
103. Verification for persons of public interest can, for example on Twitter or Facebook, be performed at a later date: https://help.twitter.com/en/managing-your-account/about-twitter-verified-accounts; https://www.facebook.com/help/1288173394636262?helpref=faq_content (accessed June 21, 2020).
104. After the e-residency was introduced in Estonia, the implementation of e-ID solutions is being discussed in several countries, including Switzerland, see https://www.bj.admin.ch/bj/de/home/staat/gesetzgebung/e-id.html (accessed June 21, 2020).
105. Mühle et al.; Sullivan and Burger.
106. Stark et al.
107. Lazer et al.; Grinberg et al.
108. Helberger, Pierson, and Poell.
109. Biaggini.
110. Oehmer.
111. Noelle-Neumann; Porten-Cheé et al.; Ross et al.; Keller and Klinger; Oehmer.
112. Ferrara; Stella, Ferrara, and De Domenico; Bastos and Mercea.
113. Ross et al.; Cheng, Luo, and Yu.
114. The case is known in Switzerland in which incomplete and nontransparent information from the Federal Council led the Federal Supreme Court to annul the result of the 2016 referendum on the popular initiative “Für Ehe und Familie – gegen die Heiratsstrafe [For marriage and family - against the marriage penalty].” The Federal Supreme Court justified the decision on the grounds that the incorrect information provided by the Federal Council, which was disseminated by political players and mass media, violated the right of voters to objective and transparent information, with the consequence that they were unable to form and express their opinion correctly. Given the tight outcome (50.8% no votes), the result of the vote could have been different. See https://www.bger.ch/files/live/sites/bger/files/pdf/de/1C_315_2018_yyyy_mm_dd_T_d_13_11_34.pdf (accessed June 21, 2020). If erroneous communication by the Federal Council can lead to the annulment of a voting result, this seems imaginable as well in the case of an online discourse biased by social bots in the run-up to an election or vote.
115. Shao et al.; Mønsted et al.
116. Lazer et al.
117. Ibid.
118. Gillespie, “Governance of and by Platforms”; Helberger, Pierson, and Poell; Napoli; Puppis; Saurwein, Just, and Latzer.
119. Tucker et al.; Deibert; Jarren.
120. Gillespie, “Governance of and by Platforms”; van Dijck, Poell, and de Waal; van de Poel et al.
121. Kley and Tophinke.
122. Lazer et al.; Pasquale, “The Automated Public Sphere.”
123. Gillespie, “The Relevance of Algorithms”; Gillespie, “Governance of and by Platforms.”
124. Helberger, Pierson, and Poell.
125. Deibert; Tucker et al.
126. Gillespie, “Governance of and by Platforms”; Jarren.
127. van Dijck and Poell; Jarren.