Abstract
The subject of this article is the currently fraught relationship between the evolution of communication technologies and the implications their daily use can have for the invasion of an individual's privacy. Many science fiction audiovisual narratives approach this evolution from a dystopian, technophobic point of view, focusing on its effects on the individual. In reality, this is precisely what became clear in the case of Google Glass, where the fear of losing intimacy caused the social rejection of its use in everyday life, although not in certain fields of work and training.
Historically, technology, communicative phenomena, and the legislation that protects them have been closely linked. This has occurred differently in each country, but with some common elements. An example of a comparative study of this type is found in Rodríguez-Campra García (2009),1 who reviews the regulations of Europe and the United States on the protection of minors regarding content and advertising insertion in the audiovisual media. Rodríguez-Campra analyzed 12 European countries and the United States and identified three levels of protection of the child against advertising: Denmark and the Netherlands would be the most protective, while Ireland, Portugal, Sweden, and the United Kingdom develop less regulation. The study also concludes that the United States is the most permissive country, delegating the regulation of advertising to the television networks themselves. Nevertheless, all of these countries have bodies that regulate, advise, and sanction, and all agree on that point, regardless of who does the regulating.
In 1789, the First Amendment of the Constitution of the United States already explicitly forbade any abridgment of the freedom of the press.2 In Europe, censorship had been exercised since the sixteenth century, as Sáiz3 points out: first over political issues, but also over religious ones, and during the seventeenth century strictly for political reasons linked to the English revolution.4 The best-known moment may be the Declaration of the Rights of Man and of the Citizen, also of 1789, which, born of the French Revolution, proclaimed the “free communication of thought and opinions” as “one of Man's most precious rights.”5
On the other hand, the historical relationship between communication and ethics has been tied more closely to the performance of a profession (deontology) than to the use the public could make of what was communicated. But technology has long since been put in the hands of consumers (and not just professionals), turning them into producers of information. The Chicago Tribune was probably the pioneer in launching an integral online version of a newspaper (1992), and from then onwards the online press would involve its public, first through surveys to determine topics of interest, and then through the direct involvement of citizens.6 While in 1982 USA Today was the first newspaper to apply the service journalism formula (thinking about the why before the what), OhMyNews.com (a Korean newspaper) was the first to employ this concept of citizen journalism in a radical way, in relation to technologies7 and to social journalism.8 From there to the concept of “prosumer” was just a step away.9
Technology has effectively allowed production control and ownership of the news (traditionally concentrated in professionals) to pass to the citizen, giving rise to the “social logic” to which Hernández Serrano and others refer, surpassing the previous “computational logic” (computer related) and “communicative logic” (media related). In this new social logic, the ownership of the news (journalists or citizens?) is questioned; it raises new ways of relating to the news and a proactive role for the audience. Perhaps this reflects a utopia more than a reality. These changes, in which control of content passes over to the users, create a new framework for the legislative and ethical sphere: control of information by the company–professionals–work chain is lost, and it is necessary to find it within the public. And this is so because the “prosumer” (who asks for permission to publish) has already been surpassed by the “prodesigner” (who takes the initiative to access, select, produce, and distribute).10 Data from the project “The Pew Internet and American Life” show the tendency of the youngest users toward prodesign. Because of all this, there arises, necessarily and more than ever, a “self-preservation instinct” required for the coexistence of freedom of expression–technology–communication in a place where legislation lags behind and ethics may be absent.
Technology in Communication: The Past That is and the Present That is Coming
On June 27, 2012, during the fifth Google I/O conference at the Moscone Center in San Francisco, Google announced its Glass Project, a “futuristic technology” that had already begun to leak to the world from April of that year.11 Hand in hand with the GoogleX12 laboratory, the project proposed the development of augmented reality glasses capable of connecting the individual with information beyond their own environment. The Glass Explorer Edition went on sale in 2013 for developers and in 2014 for consumers (United States); in 2015 it stopped being commercialized. From a failure in the individual (most likely personal) sphere, it has turned into a new proposal for the professional field (the Glass at Work program) with the Glass Enterprise Edition (launched in July 2017), whose applications seem to be unlimited.
It seems reasonable to think that the failure of the Glass Explorer Edition, aimed at the private user's daily life, could be related to invasions of privacy. Jason Hong points out that historically there has been a rejection of different technologies: from the telephone to the Ubiquitous Computing Project itself, which proposed a future “in which one day computing, communication, and sensing would be enmeshed in the everyday world, and could be used to seamlessly support us in our daily lives.”13 The idea is not Hong's but Mark Weiser's,14 who in 1991 already “dreamed of a World in which users would enjoy the service which would be supplied automatically and transparently.”15 The researcher at the Computer Science Laboratory of the Xerox Palo Alto Research Center defined Ubiquitous Computing as computing that went unnoticed. Today we talk about the “Internet of things” (IoT16) and wearable products, but the concept is the same and the problems are similar.17 In fact, culture and technology have been distant and controversial elements throughout history. Postman put forward three types of societies18: “tool-using cultures, technocracies, and technopolies.”19 This submission of all forms of cultural life to the sovereignty of technology is, as Aibar20 reminds us, the paradigm posed by Postman as a possible form of existence. There are, in fact, two great perspectives: that of those who consider technology a conditioner of culture (Postman's argument) and that of those who see culture as an obstacle to technological development (Ogburn's “cultural lag”21). In any case, these deterministic visions gave way to the constructivist approach, in which society itself is involved in the generation (or rejection) of technologies, rather than in a technology-driven evolution with technology as an isolated element. “Society and technology [says Aibar] appear as a seamless fabric.”
It may seem anecdotal when Fischer describes the reluctance to have a (landline) telephone at home due to the fear of “solicitors, purveyors of inferior music, eavesdropping operators, and even wire transmitted germs,”22 but the anecdote may be the same one we experience today facing the security and privacy problems of our information transiting the net. Or perhaps not!23
Technophobic Dystopias: Concept and Context
What has changed since the reluctance at the origin of the telephone? By 2020, “the number of connected devices is estimated to grow exponentially to 50 billion,”24 and this growth will come, fundamentally, from everyday appliances (cars, refrigerators, lights, etc.). Taken to the extreme, this could place us in a scenario in which we could talk about “technophobic dystopias,” where technology would lead us to a situation we really do not want to reach. That seems to be what happened with the Glass project.
To understand the concept, it is convenient to look at each of its parts individually: on the one hand, dystopia; on the other, technophobia. Separately or jointly, neither concept is new. In a very simplified way, we could say that dystopia, a very frequent term today, is the dark opposite of utopia. Thomas More's Utopia, published in 1516,25 is still considered one of the main references, although not the only one, when it comes to addressing the issue. From More's text, but also from the approaches of authors such as Plato (in his Republic), Francis Bacon, or Bellamy, different conceptions of utopia were generated, but so were countless critical approaches.
More coined the term “Utopia” from the Greek words “ou” and “topos”: “no place,” rendered as “nowhere” in the sixteenth-century English translation, or “there is no such place,” as Quevedo would translate it into Spanish in the seventeenth century. According to Javier Muguerza, “More allowed himself to play with the possibility of deriving the initial ‘U’ from the Greek prefix ‘EU,’ with which—more than ‘no place’—‘utopia’ would come to mean ‘a good place’ or a place worthy of becoming a reality.” As a consequence, it is this “utopic vision” of utopia that seems to have entered into crisis in recent times.
Following López Keller “[If Neusüss's definition of utopia spoke of a ‘dream of a just and true order of life’], now … we find its negative: it is not a dream, but a reality … an immediate reality. And it is not a true and just way of life, but of its opposites, false and unjust. It is no longer the ideal that is proposed as a model to be reached, but the undesirable reality that is seen as possible or even probable. Optimism has given way to pessimism …”
For López Keller,26 “Dystopia, then, is not a set of prejudices, feelings or ideas with regard to certain aspects of a utopian society…. Dystopia or negative utopia is characterized fundamentally by the aspect of denouncing possible hypothetical pernicious developments of today's society. In this sense, it is much more anchored in the present than classical utopias; it does not originate from reason or moral principles to develop an ideal model, but rather, it deduces a future nightmare world from the extrapolation of present realities.”
From Social Dystopias to Individual Dystopias
From Imagined Examples to Reality
The technology applied in communication has always been conceived and planned on the basis of two aspects: the one that demands a better world27 and the one that involves a risk.28 Drawing on science and technology, artists have proposed future worlds, sometimes better (sometimes more discouraging) than the present ones. Jules Verne projected (and not merely imagined, as is often thought) the technological advances that began in the nineteenth century. Indeed, dystopias are a projection toward the future. As a general rule, dystopia has a social dimension, meaning that it affects all individuals, or at least a large number of them, manifesting itself in many cases in totalitarian or undesirable regimes, usually arising from a violent situation of human origin (wars, riots, extreme inequalities, etc.) or from natural phenomena that give rise to postapocalyptic scenarios. This social vision of dystopias is consistent with the concept of the network society that many authors have systematized29 and others have interpreted.30 It is in these technological–communicative–social contexts that law and order begin to be diluted. Legislation, norms, and self-regulation try to adapt (always late) to these new situations. This has been a constant concern, from the last century31 to the current one32 and to today.33
For example, in Europe, the relationship between personal rights and the Internet (the right to be forgotten, limitations on the collection and processing of personal data, the existence of national data protection authorities, and penal and economic regulations) has been confronted from various fronts of action around the Internet.34 In Latin America, some countries have shown particular concern about information technology risks: Mexico (the Federal Telecommunications and Broadcasting Law and the Federal Law on Protection of Personal Data Held by Individuals), Peru (whose Constitution makes explicit reference to computer services and their repercussions on personal and family privacy35), or Venezuela (which also includes the right to the protection of honor, private life, privacy, self-image, confidentiality, and reputation, and mentions the necessary limits on the use of information technology36). These norms, moreover, can change and be updated very frequently (as in the case of Mexico). In the United States, on the other hand, the focus of action is fundamentally economic (with all its derivations), centered on the Federal Trade Commission.37 In any case, and according to García González, “the right to privacy has gone from being a negative freedom—that is, a freedom lent to individualism that demands respect for others; that is, a defense right—to a positive freedom where the individual has the power to control all that information that is relevant to him or her.”38 The social dimension of the technology–communication binomial and its legal and moral implications is under way, probably in such constant evolution that, like the society–legislation relationship itself, it will never be solved. In fact, globalization itself is not a new phenomenon at all; it is as old as ideas, religions, and their expansion.39
However, here we are interested in reflecting on the personal aspect of our globalized world, which matters from two main dimensions: that of the prodesigner of information and that of the individual who, without even necessarily consuming, is an actor in his or her world. The Glass project came about for the personal sphere, and perhaps that is precisely why it failed. It was a project that had been previously imagined (because we project from the technology we have toward what we suppose we can reach), and it pointed toward dystopian (not utopian) worlds, which is why Project Glass was halted. Indeed, the reality of Glass had, to a large extent, already been raised in fiction. Here we consider three examples of audiovisual fiction that, exceptionally, present certain particular aspects that have drawn our attention. These are dystopian narratives that can be classified within the framework of science fiction and that share the peculiarity of not being framed, as is often the case, in the cyberpunk current, whose stories usually have in common the proposal of visions of a future in which the (foreseeable) development of technologies generates societies in which the individual has no place: hypertechnical megacities where the city disappears as a universe of existence, a paradigm of the nonplaces analyzed by Marc Augé.40
Precisely the nonexistence of this social component of dystopias—whether or not associated with the technological aspect—is what determined the choice of the stories mentioned. In all of them, set in a future very near to our present day, or at least very similar to it, the social dimension has been removed in order to focus fundamentally on the effects that technologies produce in the individual sphere.
It is precisely that future, undated yet so close and so similar, that makes these stories disturbing; only the technologies shown allow us to differentiate the time of the story from our own. And these are not friendly technologies, or at least the use made of them turns them into something that generates fear and concern in us.
Perhaps nowadays, in a society so accustomed to Big Brother and other similar reality formats, the idea of La mort en direct (Death Watch)41 is no longer as surprising as it was at the time, when it was truly a science fiction story. It took place in an undefined time, yet it differed from the moment of its creation in neither scenarios, costumes, vehicles, nor technologies, except in the aspect that interests us here: an individual who decides to become a camera through a cerebral implant that allows him to transmit everything he sees to a television control room, including the most private moments of his own life, but above all the path to the protagonist's announced death.
The important ethical questions raised by the idea of a television program following someone who is going to die—in a society where death is no longer present—seem almost minor, however, when compared to the fact that someone receives an implant allowing the retransmission of a life and a death without the filmed subject being aware of it.
It is an early approach to the ethical questions about intimacy that the actual broadcasts of these television formats have since raised so often, aggravated in the film by the fact that the subject does not know she is being recorded (although she once gave her consent, she has also fled from it) and that everything is transmitted to a country paralyzed by watching her physical degradation on the screen. We find ourselves facing a reality that would not have existed without the program: a reality built by and for a television channel, by an unscrupulous TV producer, and thanks to a human camera. But it is an imposture, a simulation of reality that strikes us emotionally as spectators when we learn that the protagonist is not really sick and that all her symptoms are provoked precisely to obtain a realistic broadcast, something she is also unaware of. It is the situation taken to its limit that imposes ethical decisions: the human camera's intentional self-blinding, horrified by his involvement in the use of a technology that destroys everything, and the protagonist's suicide as the only way to end an immoral practice, exposing the perversion of that technology, leaving the producer without a program, and confronting him with his actions.
We take two other examples of audiovisual narratives that bear on the possible ethical consequences of the use of certain technologies, this time from Black Mirror, a contemporary series known precisely for its technophobic approach. In both episodes the characters use these technologies consciously; in fact, the technologies are intended as an aid in their daily lives, and nevertheless we see how they end up affecting the characters negatively, again raising ethical questions. It is not the tool itself, but the use we give to it.
In the first-season Black Mirror episode The Entire History of You,42 we face a science fiction story set at some future date in a society quite similar to ours, with a slightly retro air in its cars and house designs, although with more advanced technology. Practically all of its members—at least those of the upper middle class who appear in the story—wear a memory implant in the neck that allows them to record their entire lives, store them, and replay them, either for themselves alone through their own eyes or for others through external devices. It seems a useful tool: it lets them remember and review a specific situation as many times as they wish, analyzing it and looking for details that may have gone more or less unnoticed at the time of the events, such as body movements, gestures, or intonations when speaking. And it is a personal decision whether to watch those recordings alone or share them with others. The ethical question appears on the scene again when analyzing those images becomes an obsession, to the point of forcing someone to share what is purely personal, forcing them to show images/memories that belong to the most intimate dimension of the other individual, or even employing violence to impose the deletion of another's memories. The result, in this case, is again very negative for the protagonist, since his actions lead him to lose everything he cared about: family, work, friends, and so on.
In another Black Mirror episode, Arkangel,43 an overprotective mother places a tracking implant, which later cannot be removed, in her young daughter to protect her. The implant tells the mother where her little one is and allows her to see everything the girl sees; it also provides other information, such as her state of health and the things that frighten her. This leads the mother to take absolute control of her daughter's life, from giving her food supplements to activating a filter that blocks images causing her negative feelings, all from a tablet that works as a remote control panel. As a result, the daughter grows up in a kind of wonderful, danger-free world parallel to reality, in which she does not know blood, violence, or sex, and in which her own schoolmates avoid her, since she works as an involuntary spy enabling her mother to see everything she sees. Once more, the ethical questions about the tool's use are fully manifested when the daughter is an adolescent and the mother, after having abandoned her parental control tablet, takes it up anew, spying on her daughter again and interfering with her life—with the added fact that although the daughter knows the device exists, she does not know that her mother has reconnected to it. Once again, we see how a tool, used in this case with protective intent by an obsessive mother, serves to control and alter the life of an individual who does not know she is being watched and manipulated, and who eventually reacts violently upon discovering it.
The fiction shown is close to reality. Technology does not permit everything today, but it does allow part of what fiction proposes. The Glass Explorer Edition placed the question of privacy precisely among its premises, because, unlike a tweet, Google Glass tried to become that extension of the human being (never more real) which communication theorists predicted of technology.44 An extensive 2013 article by Charles Arthur in The Guardian reflected on some of the issues the Glass project45 was beginning to raise. Arthur asked: “Can a child properly consent to filming or being filmed? Is an adult, who happens to be visible in a camera's peripheral vision in a bar, consenting? And who owns—and what happens to—that data?” Oliver Stokes (Principal of Design and Innovation at PDD) also said: “The idea that you could inadvertently become part of somebody else's data collection—that could be quite alarming. And Google has become the company which knows where you are and what you're looking for. Now it's going to be able to compute what it is you're looking at.” Some of these issues have since been addressed by rules (at least in certain countries), but other uncertainties could arise today, and in fact the project has been reoriented toward the professional field and no longer toward the general public.
Glass and Privacy: A Really Unexpected Reaction?
We are probably accustomed to the lack of privacy (in the communicative field), and it is institutions (more than individuals) that watch over it. Google Buzz failed due to pressure from civil organizations and governments, and before the Glass project, 37 data protection authorities sent Google a joint letter asking for explanations.46 Users, meanwhile, speak through their attitude: in a survey conducted by Toluna, 72 percent of Americans said “I would not use Google Glass for privacy reasons.”47 But the real question is not this one; rather, it is: must I accept that they see me (that is to say, that they record me) without my consent?
The development of technology connected to information has evolved in unexpected directions, and, as a result, different cultures have developed different regulatory approaches. The right to privacy and data protection in much of Europe has been based on seeking the consent of the interested party, while in some Latin American countries it has been treated as a fundamental right.48 In 1972, Batlle defined the right to privacy as the right of the individual to solitude and “to have a reserved sphere in which to develop his/her life without the indiscretion of others having access to it.”49 The concept evolved from Stuart Mill (who identified it with absolute independence50) to Warren and Brandeis (who saw it as the protection of private and domestic life). But technology has led to an understanding of intimacy in a different way, overcoming the barriers of the merely domestic “to assume progressively a public and collective significance.”51 And this is where Postman's statement of 1993 takes on real meaning: “information without regulation can be lethal,”52 because privacy means controlling the relevant information about each one of us.
Google itself raised this when launching its Glass Explorer Edition in 2013, but it did not sufficiently foresee the impact at another level: the nonintegration of the technology in a wearable context was decisive. Here the business network (and, again, not so much the citizens) also acted, developing products based on Aircrack-NG, software that blocked the operation of Glass (the so-called Glasshole script53). Google then reacted and included a series of recommendations for the users of its Glasses, among them respect for others' privacy.54
But Stokes goes even further and argues that, in addition to the privacy problems of the Glass project, it creates a problem of interpersonal relationships: “The way we communicate with others through body language and how we want others to perceive us have always been strong drivers for humans, and therefore will be directly impacted by products such as Glass.”55 And this is another possible cause of its failure. Intimacy is altered not only by what someone does, but by what they might do. Google Glass was thus intended to be a tool that, like a smartwatch, complemented or even exceeded (for certain uses) the benefits of a smartphone. But, unlike either of those, the physical intrusion its use implies (placed before the user's eyes, explicitly threatening the privacy of other individuals) did not allow its generalization. It is, however, a curious reaction (perhaps like Fischer's observation on the beginnings of telephony), because Glass only poses such a threat to privacy when the camera is recording (and this is signaled by a light, which warns those in the surroundings).
For the time being, Glass has had to take a necessary turn toward the world of work, where it had already been functioning since the Explorer version. The medical and training fields were the quickest to adopt this technology,56 but there are many others, such as biology,57 disaster management,58 expert assessment, or manufacturing processes, where Glass is making inroads.
Glass is the most invasive communication technology so far (at least in the current, global cultural context in which it moved). Yet we live alongside other similar technologies that are not strange to us, some regulated and others not: surveillance cameras, mobile phones (which record or take pictures just as Glass does), or recording cameras installed in cars. In an even more delicate area, that of children, the Internet of toys has also met with privacy reservations, but it continues to coexist and develop.59
Conclusions
Technology and communication are interrelated with certain rights, such as those of honor, privacy, or personal information. Their coexistence has been anticipated by projecting situations at which society would not like to arrive. These dystopias can only be imagined, and audiovisual fiction offers us both distant and very recent examples of them. Some present these dystopias from a clearly technophobic approach and from an individual (rather than social) perspective.
Worry and technophobia are normally associated with social issues that affect our privacy: cameras that record all our movements, facial recognition devices, and Big Data that holds the data of our entire lives. However, the examples alluded to here (La mort en direct, from 1979, and the episodes The Entire History of You and Arkangel, both from Black Mirror, from 2011 and 2017, respectively) bear on a particular aspect: the control of one individual by another. We are no longer talking about more or less abstract entities that watch us and know everything about us, such as the State, the Government, or corporations and companies, but about specific people who provoke situations that would not have occurred without their intervention and (bad) use of those technologies. The ethical dimension of the subject lies in the cancellation of the free will of the affected subjects, in how the existence of these technologies affects someone's life in a clearly intentional way.
The most controversial current communication technologies (Glass) do not reach the extremes of fiction, but they take the technology–communication–privacy triangle (and its moral drifts) into areas in which the fear of loss (in this case, loss of privacy) leads to failure. In the examples of fiction commented on here there is, in a certain way, a real punishment, almost in the form of poetic justice, for those who transgress the ethical limits in the use of these technologies: each loses what he or she most wanted, whether sight, work, family, a daughter, or almost life itself. With Glass, the punishment has its origin in society which, facing the dangerous individual who makes bad use of the technology, isolates and rejects the wearer of the Glass (the “glasshole”). Like the fiction, Google Glass is thus a real example of technophobic dystopia, one that denounces possible negative aspects of today's society; what scares us is that we increasingly glimpse that world of technological nightmare in which we suspect the technologies and their use are very likely to escape our control. And, contrary to what usually happens, in this case it is not the law that stops it, but society itself.
Footnotes
Rodríguez Campra García.
Bills of Rights, Congress of the United States. First Amendment. 1789. “Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.”
Sáiz.
Barrera.
Déclaration des Droits de l'Homme et du Citoyen. Asamblea Nacional constituyente francesa. 1789. Art. 11. “La libre communication des pensées et des opinions est un des droits les plus précieux de l'homme. Tout citoyen peut donc parler, écrire, imprimer librement; sauf à répondre de l'abus de cette liberté, dans les cas déterminés par la loi.”
Meso Ayerdi, 7.
Chung and Nah.
Gallego and Luengo.
Hernández Serrano et al., 25.
Ibid.
Stone.
Hong, 10.
Weiser, 82.
Ochrymowicz.
Li, 154.
Airehrour, Gutierrez, and Ray, 203.
Postman.
A tool is any utensil, sophisticated or not, in the daily life of a culture. As for technocracies, in Postman's view Francis Bacon was the first to see clearly the connection between science and the improvement of the human condition (Postman, 35).
Aibar.
Ogburn.
Fischer.
Hong.
Airehrour, Gutierrez, and Ray, 205.
Moro.
López Keller.
Jung.
Livingstone.
Castells, La sociedad red.
Bell.
Westin.
Solove; Svantesson and Clarke; Nissenbaum.
Parida; Breznitz and Palermo.
The concern is not recent: in 1968 the Assembly of the Council of Europe drafted Resolution 65/509/EC on “Human Rights and new scientific and technical achievements,” from the Advisory Council of the Council of Europe; already in 1967 the Council of Europe had created a working framework for information technologies and their implications for the rights of the individual.
Political Constitution of Peru of 1993. Art. 2.6: “Everyone has the right: That computer services, computerized or not, public or private, do not provide information that affects personal and family privacy.” Art. 2.7: “To honor and good reputation, to personal and family intimacy as well as to one's own voice and image.”
Constitution of the Bolivarian Republic of Venezuela. Art. 60: “Everyone has the right to the protection of their honor, privacy, self-image, confidentiality and reputation. The law will limit the use of information technology to guarantee the honor and personal and family privacy of citizens and the full exercise of their rights.”
Privacy, Identity and Security are the three axes on which the policy of the Federal Trade Commission revolves, fundamentally from an economic perspective, although it includes other aspects such as Computer Security for Kids. https://www.consumer.ftc.gov/topics/privacy-identity-online-security.
García González.
Tubella.
Augé.
La mort en direct, Bertrand Tavernier, 1979.
Jesse Armstrong, Black Mirror. The Entire History of You, #1x03, Netflix, 2011.
Jodie Foster, Black Mirror. Arkangel, #4x03, Netflix, 2017.
See, for example, McLuhan's classic approaches.
Arthur.
EFE.
EuropaPress.
García González, 750.
Batlle Sales.
For more details on the evolution of the concept of privacy, see the text by García González.
García González, 760.
Postman.
Word game: “glass” and “asshole.”
Under the heading of “DON'TS,” Google included: “Be creepy or rude (aka, to “Glasshole”). Respect others' privacy and if they have questions about Glass do not get snappy. Be polite and explain what Glass does and remember, a quick demo can go a long way. In places where cell phone cameras are not allowed, the same rules will apply to Glass. If you're asked to turn your phone off, turn off Glass as well. Breaking the rules or being rude will not get businesses excited about Glass and will ruin it for other Explorers.” In https://sites.google.com/site/glasscomms/glass-explorers.
Stokes.
Glass has been used prominently in surgery, but also in other medical fields such as diagnosis. A review of 852 medical applications of Glass outside the scope of surgery can be found in Dougherty and Badawy.
Paxton, Rodriguez, and Dale.
Carenzo et al., who refer to it in their article “Disaster Medicine through Google Glass.”
Mascheroni and Holloway.