Abstract

The emergence of generative artificial intelligence (AI), exemplified by models like ChatGPT, presents both opportunities and challenges. As these technologies become increasingly integrated into various aspects of society, the need for a harmonized legal framework to address the associated risks becomes crucial. This article presents a comprehensive analysis of the disruptive impact of generative AI, the legal risks of AI-generated content, and the governance strategies needed to strike a balance between innovation and regulation. Employing a three-pronged methodology—literature review, doctrinal legal analysis, and case study integration—the study examines the current legal landscape; synthesizes scholarly works on the technological, ethical, and socioeconomic implications of generative AI; and illustrates practical challenges through real-world case studies. The article assesses the strengths and limitations of US governance strategies for AI and proposes a harmonized legal framework emphasizing international collaboration, proactive legislation, and the establishment of a dedicated regulatory body. By engaging diverse stakeholders and identifying critical gaps in current research, the study contributes to the development of a legal framework that upholds ethical principles, protects individual rights, and fosters responsible innovation in the age of generative AI.

Introduction

Previous research has established that generative artificial intelligence (AI) encompasses technologies that produce various forms of content, such as text, images, sounds, videos, and code, based on algorithms, models, and rules.1 It operates within an unsupervised or partially supervised machine learning framework, utilizing statistics and probability to generate artificial artifacts.2 Unlike previous AI systems that primarily distilled information, generative AI is capable of creating artificial content and learning patterns and distributions by analyzing training examples containing existing digital content. One well-known example of generative AI is ChatGPT, developed by OpenAI. OpenAI has introduced several models, including GPT, GPT-2, GPT-3, GPT-4, and the image-based iGPT, which have gained widespread popularity worldwide. Other technology companies, such as DeepMind, Google, SenseTime, and Alibaba, have also developed their own language models and invested in large models.3

Generative AI operates at the forefront of technological innovation, transforming machines from passive tools into active agents in creative endeavors.4 Taddeo et al. provide a current assessment of AI’s capabilities, whereas Mazzon et al.’s work offers insights into its future potential. This transformative impact spans multiple industries, from healthcare, where AI assists in complex diagnostics, to the creative arts, where it challenges traditional concepts of authorship and creativity. In the legal field, existing statutes are being adapted to address these novel issues, a phenomenon explored in depth by legal scholars such as Lim (2018), who discusses the evolving nature of intellectual property law in the age of AI.

The disruptive nature of generative AI requires a critical examination of the current legal frameworks governing this technology. In the United States, a leader in AI innovation, governance strategies are in a state of flux, adapting to the rapid pace of technological change. This article explains these governance strategies, highlighting their strengths and identifying the gaps that could compromise legal accountability and ethical governance in AI. Although this article draws heavily from the governance strategies in the United States, the proposition of a robust and harmonized legal framework is grounded in the broader principles of international human rights law and the emerging global consensus on the need for AI governance. The theoretical foundations for this framework are rooted in the works of scholars such as Taddeo and Floridi, who argue for an ethical approach to AI governance that prioritizes human rights, transparency, and accountability.5

Thus, our work extends beyond just identifying the shortcomings in the US legal frameworks; it advocates for a harmonized, proactive approach to AI governance. Recognizing that technology transcends national boundaries, this article emphasizes the need for international collaboration to establish standards that uphold human rights and ensure equitable economic development. The work of sociologist Eubanks provides valuable insights into the social implications of AI, particularly in terms of equity and justice.6

Our research includes three key components: a literature review, a doctrinal legal analysis, and an integration of case studies, ensuring a thorough examination of generative AI from legal, academic, and practical perspectives. By employing this multifaceted methodological approach, our study offers a comprehensive view of generative AI, combining legal rigor with academic depth and practical insights.

This article is structured as follows: First, it offers a comprehensive literature review covering the technological aspects, ethical considerations, socioeconomic implications, and governance strategies for generative AI. The next section presents a legal review and analysis, examining the legal risks associated with AI-generated content and US governance strategies for AI. Two case studies then illustrate the practical challenges and regulatory issues surrounding generative AI. The discussion addresses the disruptive impact of generative AI, the relationship between “disruptive innovation” and “experimental regulation,” gaps in current AI legislation, the intersection of AI with ethical and social norms, the need for a harmonized legal framework, and engagement with industry and civil society. The article ends with a conclusion.

Literature Review

Technological Aspects of Generative AI

Generative AI encompasses technologies that leverage models, often deep learning neural networks, to generate information patterns similar to the data they are trained on.7 Examples include text, images, voice, and videos. This form of AI significantly deviates from “narrow” or “applied” AI—systems that might excel at pattern recognition or data analysis but do not create new content.8 The rapid advancements in generative AI have been driven by the development of sophisticated algorithms and models, such as generative adversarial networks (GANs), variational autoencoders (VAEs), and transformer-based models like GPT-3.9 These technologies have enabled the creation of highly realistic and diverse content, from text and images to sound and video.10
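
To make the adversarial mechanism behind GANs more concrete, the following is a minimal training-loop sketch in PyTorch; the network sizes, synthetic “real” data, and hyperparameters are illustrative assumptions rather than details drawn from the works cited above.

```python
# Minimal, illustrative GAN training loop (toy dimensions and synthetic data;
# these details are assumptions for illustration, not drawn from the cited works).
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2

# Generator maps random noise to synthetic samples; discriminator scores realism.
generator = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Stand-in "training distribution" the generator should learn to imitate.
real_data = torch.randn(256, data_dim) * 0.5 + 2.0

for step in range(1000):
    # 1. Train the discriminator to separate real samples from generated ones.
    noise = torch.randn(64, latent_dim)
    fake = generator(noise).detach()
    real = real_data[torch.randint(0, len(real_data), (64,))]
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2. Train the generator to produce samples the discriminator labels as real.
    noise = torch.randn(64, latent_dim)
    g_loss = loss_fn(discriminator(generator(noise)), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```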

Previous works highlight the unsupervised or partially supervised nature of generative AI, which allows these systems to learn patterns and distributions from vast amounts of training data.11 This has led to the development of large language models like ChatGPT, which can generate humanlike text and engage in context-aware conversations.12

Researchers have also explored the scalability and performance of generative AI systems, addressing challenges such as mode collapse and training stability.13 Advances in computational resources and optimization techniques have further enabled the training of larger and more complex generative models.14

Ethical Considerations in Generative AI

The ethical implications of generative AI have been a major focus of scholarly discourse. Researchers have raised concerns about the potential for algorithmic bias and discrimination, as AI systems may perpetuate or amplify societal biases present in the training data.15 The lack of transparency and explainability in generative AI models, often referred to as the “black box” problem, has also been identified as a significant ethical challenge.16

Privacy and data protection are other critical ethical considerations in generative AI. The use of personal data for training AI models without proper consent or safeguards can lead to privacy violations and the misuse of sensitive information.17 Additionally, the generation of fake content, such as deepfakes, raises ethical concerns about the potential for manipulation and deception.18

Scholars have also explored the ethical frameworks and principles that should guide the development and deployment of generative AI. These include the principles of beneficence, non-maleficence, autonomy, and justice.19 The importance of human oversight, accountability, and the ability to intervene in AI systems has also been emphasized.20

Socioeconomic Implications of Generative AI

The socioeconomic implications of generative AI have been widely discussed in the scholarly literature. On the one hand, these technologies have the potential to drive innovation, increase productivity, and create new economic opportunities.21 Generative AI can revolutionize industries such as healthcare, education, and the creative sectors, leading to improved services and outcomes.22

On the other hand, researchers have also highlighted the potential for job displacement and workforce disruption as generative AI automates tasks previously performed by humans.23 In addition, the impact on creative industries and intellectual property rights has been a particular concern, as AI-generated content challenges traditional notions of authorship and originality.24

The literature also emphasizes the need for proactive strategies to manage the socioeconomic disruptions caused by generative AI and to ensure that the benefits of these technologies are distributed equitably.25 This includes investing in education and skill development to prepare the workforce for the AI-driven economy and developing policies that promote responsible AI development and deployment.

Governance Strategies for Generative AI

The literature on AI governance strategies emphasizes the need for a comprehensive and proactive approach to regulating generative AI. Researchers have highlighted the importance of establishing clear guidelines and standards for the development and deployment of AI systems to ensure their safety, fairness, and accountability.26

The US governance strategies for AI have been a particular focus of scholarly attention. The executive order “Maintaining American Leadership in Artificial Intelligence”27 and the “Making AI Work for the American People” initiative28 have been discussed as efforts to promote AI innovation while addressing the challenges posed by strategic competitors. The Algorithmic Accountability Act and other legislative proposals have also been examined as attempts to establish a programmatic and accountable framework for AI applications.

However, some scholars also highlight the limitations of current governance strategies, particularly in terms of their ability to keep pace with the rapid advancements in generative AI.29 Researchers have emphasized the need for a more agile and adaptive regulatory framework that can respond to the evolving challenges posed by these technologies.

The importance of international collaboration and the development of global standards for AI governance has also been a recurring theme in the literature.30 Scholars have argued that a harmonized legal framework, grounded in the principles of human rights, transparency, and accountability, is essential to ensure the responsible development and deployment of generative AI across national borders.31

In sum, this section has provided a comprehensive overview of the technological, ethical, socioeconomic, and governance aspects of generative AI, drawing from a wide range of scholarly sources. The insights from this section inform the subsequent discussions and recommendations in the article, ensuring that the proposed legal framework is grounded in a thorough understanding of the current state of knowledge in the field.

Legal Review and Analysis

Legal Risks of Content Generated by Generative AI

The content generated by generative AI is intricately linked to the data it receives as input. This connection between input data and output content gives rise to a wide array of value judgments that can vary significantly, leading to new legal, regulatory, and ethical challenges in the development and use of this technology. Among these challenges, data security emerges as a prominent concern, encompassing issues such as illegal data collection, algorithm abuse, difficulty in distinguishing genuine content from false information, the proliferation of misleading or deceptive content, insufficient protection of personal privacy, and potential copyright infringement.32

Data Source Compliance Risk

The ChatGPT model employs advanced natural language processing techniques, combining them with search engine functionalities.33 This integration allows the model to interpret and respond to text input and, where necessary, conduct real-time searches to provide more accurate or contextually relevant responses. It builds a reinforcement learning model on top of a pre-trained language model using human feedback, connects numerous corpora, and processes large-scale sequence data using generative pre-training methods to obtain a versatile model representation.34 In essence, the model undergoes large-scale unsupervised corpus pre-training, which gives it language understanding akin to that of the human brain and the ability to generate text with a certain degree of originality, allowing it to fulfill user instructions.35
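
To make the prompt-and-generate pipeline described above concrete, the following is a minimal sketch of querying a pre-trained generative language model, using the openly available GPT-2 model through the Hugging Face transformers library as a stand-in; ChatGPT’s own weights, training corpora, and human-feedback fine-tuning are proprietary and are not reproduced here.

```python
# Minimal sketch: prompting a pre-trained generative language model.
# GPT-2 is used as a freely available stand-in; ChatGPT's actual model,
# corpora, and reinforcement learning from human feedback are proprietary.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The legal risks of AI-generated content include"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])  # the prompt continued with model-generated text
```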

The unsupervised nature of data mining and processing in the ChatGPT model obscures the origins of the data used, leading to concerns about the transparency and reliability of the AI’s content generation process. This opacity, as illustrated by Tull and Miller, creates a “black box” scenario where users are unaware of the algorithm’s data sources and objectives.36 Their work emphasizes the challenges in AI transparency and the risks posed by unfiltered data inputs, a concern echoed in recent AI research.

The lack of a filtering mechanism during ChatGPT’s pre-learning stage can result in the generation of erroneous or even illegal information, an issue amplified by the complex nature of deep learning algorithms. This aspect of AI technology, discussed in depth by Ray, highlights how different data inputs can lead to varied outputs, underscoring the importance of data quality and algorithmic accountability.37

From a computer science perspective, the security and integrity of deep learning algorithms are intimately tied to the data they process. Conducting security tests under static conditions, as suggested by Felderer et al., might reveal only a fraction of potential vulnerabilities.38 The challenge is to ensure data security and algorithmic reliability throughout the AI’s lifecycle, a topic that has gained traction in recent computer science discourse.

Sociologically, the “black box” nature of AI systems like ChatGPT raises questions about trust in technology and the ethical responsibility of developers and providers. The work of von Eschenbach in the sociological aspects of technology provides insights into how these technological advancements impact social trust and the relationship between users and AI systems.39

Algorithm Abuse Risk

The opacity of algorithms poses a significant challenge because of the technical nature of machine learning. As algorithms self-learn, the rules they create become difficult for humans to observe and comprehend at a technical level. From an external perspective, the decision-making rules of algorithms are often concealed by developers, resulting in a lack of transparency for the subjects of the decision-making process. This lack of transparency makes it challenging for individuals to understand the process and logic behind algorithmic decisions. To address the negative consequences arising from algorithmic opacity, many countries are actively promoting the disclosure and openness of corporate algorithms. This involves requiring algorithm service providers to disclose and explain the principles, logic, and decision-making processes of their algorithms. Additionally, decision-makers have the right to demand explanations of the algorithms being used.40

Algorithm Bias Risk

A major concern in the field of generative AI is the potential for algorithmic bias, a phenomenon intricately linked to data bias.41 In computer science, algorithmic bias is understood as systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one arbitrary group of users over others. When the data used to train generative AI models are skewed or contain prejudiced patterns, these biases can be inadvertently learned and perpetuated by the algorithm.

Insights from social studies further contextualize the impact of this bias. Research in this field indicates that algorithmic bias can reinforce societal inequalities.42 For example, if a generative AI model is trained on data that reflects historical biases, such as those found in employment or lending practices, the model’s outputs could potentially reflect and even amplify these biases.

Moreover, the inherent characteristics of generative AI, especially its capacity to learn and adapt, can exacerbate these issues. When biased algorithms are used repeatedly, they not only replicate but can also amplify discriminatory patterns, resulting in increasingly skewed content over time.43 This propagation of bias is not just a technical flaw; it raises ethical concerns about fairness and equity in AI-driven decisions.

Therefore, addressing algorithmic bias in generative AI is not merely a technical challenge but a socio-technical imperative. Developers and stakeholders must employ strategies like diverse dataset curation, bias detection algorithms, and continuous monitoring to mitigate these biases. Moreover, an interdisciplinary approach, encompassing insights from computer science, social sciences, and ethical considerations, is crucial to ensure that generative AI technologies are developed and deployed in a manner that is fair and nondiscriminatory.
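
As one illustration of the bias-detection strategies mentioned above, the following is a minimal sketch that measures demographic parity, that is, the gap in positive-outcome rates across groups, for a hypothetical automated screening model; the predictions and group labels are illustrative assumptions, not data from any real system.

```python
# Minimal sketch of one bias-detection check: demographic parity of positive
# outcomes across groups. The model decisions and group labels below are
# hypothetical; a real audit would use the system's actual outputs.
from collections import defaultdict

def demographic_parity(predictions, groups):
    """Return the positive-outcome rate per group and the largest gap between groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = {g: positives[g] / totals[g] for g in totals}
    return rates, max(rates.values()) - min(rates.values())

# Toy example: a hypothetical screening model's decisions for two groups.
preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
rates, gap = demographic_parity(preds, groups)
print(rates, gap)  # e.g. {'A': 0.6, 'B': 0.4} and a gap of roughly 0.2 worth auditing
```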

Privacy Protection Risk

The development and application of generative AI technology, such as the ChatGPT model, bring to light potential risks concerning citizens’ personal information and privacy. Throughout the stages of training, application, and model optimization, the data used may contain sensitive personal information. If not appropriately processed, these data can result in the misuse of individuals’ personal information by large AI models, thereby infringing on their privacy rights. Risks to citizen privacy in relation to training data in large AI models can be attributed to improper data use at the data level or inadequate handling at the level of the large model itself.44 A notable case occurred in Italy, where authorities temporarily prohibited the use of ChatGPT because of its failure to comply with the European Union’s (EU’s) General Data Protection Regulation (GDPR). This decision was based mainly on the unauthorized handling of personal data.

The handling of training data in generative AI models raises significant privacy concerns at various levels. The source data often contain substantial personal information, which may not always be sufficiently anonymized or cleaned, thus posing a risk to individuals’ privacy. This situation is compounded when developers intentionally use data containing private information for training large AI models, a practice documented in the works of scholars like Müller and Bostrom, who discuss ethical considerations in AI development.45
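
As a simple illustration of the data-cleaning step discussed above, the following sketch redacts obvious personal identifiers from text records before they enter a training corpus; the regular expressions and example record are illustrative assumptions and would not, on their own, satisfy GDPR anonymization requirements.

```python
# Minimal sketch of a pre-training redaction pass for obvious personal identifiers.
# The patterns are deliberately simple illustrations; production anonymization
# pipelines need far more robust PII detection (names here are left untouched).
import re

PII_PATTERNS = {
    # Order matters: more specific patterns run first, so the broad phone pattern
    # does not swallow strings already covered by the SSN pattern.
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with placeholder tags before the text enters a corpus."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

record = "Contact Jane at jane.doe@example.com or +1 202-555-0199 regarding claim 123-45-6789."
print(redact(record))
# -> Contact Jane at [EMAIL] or [PHONE] regarding claim [SSN].
```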

If these data are not managed correctly, two main types of privacy violations can result.

First, there is the risk of AI models themselves misusing information. This happens when, for example, an AI system trained on personal data starts generating outputs that reveal someone’s private details. Such instances, explored in the research of Zuboff, demonstrate how AI can inadvertently compromise privacy.46 This type of data misuse not only breaches individual privacy but also potentially leads to broader societal issues like discrimination. The sociological implications, as discussed by Zajko, include the erosion of trust in technology and the exacerbation of existing social inequalities.47

The second issue arises from the mishandling of data within the AI models themselves. This includes weak security measures that leave data vulnerable to hacking, allowing unauthorized individuals to access or misuse private data. This issue, examined in the cybersecurity research of Levy and Schneier, highlights the technical challenges of protecting AI systems from unauthorized access.48 It also covers unethical practices such as using personal data for purposes to which individuals did not consent, including targeted political advertising or the sale of information to third parties.

The US Governance Strategies for AI

Given the significant influence of ChatGPT and the importance the United States places on maintaining its leading international position in the field of AI, the country offers an instructive example of a relatively open regulatory strategy for AI governance. This approach seeks to establish the credibility of AI models and ensure their adherence to legal and ethical standards. On April 11, 2023, the National Telecommunications and Information Administration, under the US Department of Commerce, solicited public comments on potential accountability measures for AI systems.

In 2019, the United States issued the executive order “Maintaining American Leadership in Artificial Intelligence”49 to strengthen its global leadership in AI governance. In the same year, President Trump signed the “American Artificial Intelligence Initiative,”50 which aims to address challenges from strategic competitors and maintain the country’s leading position in the AI field. This initiative calls for federal agencies to develop standards for the development and use of AI in various technologies and industries, promoting public trust in AI systems.

The United States introduced the Algorithmic Accountability Act in April 2019, which mandates impact assessments for “high-risk” automated decision-making systems. The Act defines automated decision-making, delineates its jurisdictional scope, and specifies what constitutes a high-risk algorithm. It requires the evaluation of automated decision-making systems, particularly high-risk algorithms, against multiple criteria covering privacy and security hazards, discriminatory effects, and impacts on individual rights. Its scope also extends to algorithms that monitor public spaces and those that handle sensitive personal data such as race, gender, health, and criminal records.

The “Memorandum on Artificial Intelligence Application Supervision Guidelines (Draft),” issued by the United States in January 2020, establishes a framework of prudential monitoring. To promote AI research and development, the memorandum includes policy recommendations, frameworks, pilot programs, and voluntary consensus standards. In the same year, the National Artificial Intelligence Initiative Act was introduced to coordinate federal AI research and application in order to boost economic growth and security. That year also saw the “Generating Artificial Intelligence Cybersecurity Act,” which required the US Department of Commerce and the Federal Trade Commission to evaluate the benefits and risks of AI and mandated a comparative analysis of foreign AI efforts to identify supply chain hazards and remedies, prompting further national AI strategy proposals. The “Algorithmic Fairness and Online Platform Transparency Act” of May 2021 required that algorithms be disclosed to users, regulators, and the public, seeking openness and responsibility in AI systems. In July 2021, the US Government Accountability Office released the “Artificial Intelligence Accountability Framework,” focusing on governance, data, performance, and monitoring.51 To ensure AI system fairness, reliability, traceability, and governance, the framework specified essential practices, addressed challenges, and developed accountability processes.52

In 2022, the “Algorithmic Accountability Act” mandated that companies conduct impact assessments for automated decision-making and key decision-making processes and provide the resulting assessment documents to the relevant agencies. The assessments covered the potential negative impact on consumers and corresponding improvement measures, as well as the evaluation of system performance, privacy protection, data security, fairness, and nondiscrimination.53 This is vital because AI is increasingly used in decisions that affect human lives and fundamental rights. The “Artificial Intelligence Capabilities and Transparency Act” of the same year emphasized the need to improve federal AI capabilities. It promotes the agile adoption of new AI technologies and stresses the necessity of transparency in AI procedures to maintain public trust and avoid the “black box” problem.

Overall, the United States has introduced numerous bills to promote and govern AI. It follows a governance path that combines corporate self-regulation and government regulation. With guidance from the “Algorithmic Accountability Act,” the country has established a series of laws and regulations to create a programmatic and accountable framework for AI applications, particularly in public governance scenarios. The country’s approach to AI governance is a complex interplay of promoting innovation and addressing the multifaceted concerns that come with rapid technological advancement. These overarching strategies demonstrate the US government’s acknowledgment of AI as a transformative tool requiring holistic planning and coordination between various sectors and stakeholders.

AI Regulations in Other Countries

As the United States navigates its approach to AI governance, it is essential to consider the regulatory landscapes in other countries, particularly concerning generative AI.

European Union

The EU has been at the forefront of AI regulation, proposing the AI Act, which aims to create a comprehensive legal framework for AI. This risk-based approach categorizes AI systems based on their potential risks and imposes strict requirements for high-risk applications. The EU’s GDPR also plays a crucial role in regulating AI, particularly in terms of data protection and privacy. The GDPR’s principles of data minimization, purpose limitation, and explainability have significant implications for the development and deployment of generative AI models, which often rely on vast amounts of data and can be difficult to interpret.

China

China has ambitious plans to become a global leader in AI by 2030, as outlined in its “New Generation Artificial Intelligence Development Plan.” The country has also released a series of ethical principles and governance guidelines for AI, focusing on issues such as security, transparency, and controllability. The “Beijing AI Principles,” launched in 2019, emphasize the importance of human-centered AI and the need for international collaboration in AI governance. However, concerns have been raised about the potential misuse of AI technologies, particularly in the context of surveillance and social control.

Other Countries

Many other countries have developed their own AI strategies and regulations, addressing various aspects of AI development and deployment. For example, Canada’s Pan-Canadian Artificial Intelligence Strategy focuses on research, talent development, and the responsible use of AI; Japan’s AI Technology Strategy aims to promote the development and application of AI while ensuring its ethical and safe use; and the UK’s National AI Strategy outlines plans to invest in AI research, build a resilient AI ecosystem, and establish a pro-innovation regulatory environment.

Although these strategies and regulations vary in their scope and emphasis, they all recognize the need for a balanced approach to AI governance that promotes innovation while addressing the ethical, social, and legal challenges posed by generative AI and other AI technologies.

Implications for Legal and Ethical Norms

The US governance strategies, although forward-thinking, illuminate the challenges in balancing technological growth with ethical, social, and legal norms. The flexibility of the current US regulatory environment for AI fosters innovation and allows the United States to remain a global AI leader. However, it also raises concerns about whether this approach is sufficient to safeguard against the myriad risks that unfettered AI development may pose.

The ethical implications are profound, especially concerning generative AI technologies. Current US strategies endeavor to mitigate risks related to privacy intrusions, biases, and potential abuses of AI in surveillance and decision-making. Yet, they also highlight the reactive nature of policymaking in this sphere, where legislation is often scrambling to catch up with fast-paced technological advancements.

Legally, the current frameworks begin to address issues like intellectual property disputes, misinformation, deepfakes, and data privacy breaches. However, the environment remains fragmented, lacking a comprehensive legal structure that covers the breadth of potential AI-generated content and its consequences.

The integration of AI into critical sectors like healthcare, judiciary, and defense adds layers of complexity, necessitating stringent standards for accuracy, reliability, and fairness. Although current strategies are still in their nascent stages, they have sparked a national discourse on the ethical use of AI, signaling a gradual shift toward more inclusive, transparent, and accountable AI practices.

In conclusion, US strategies for AI governance are evolving, marked by concerted efforts to maintain global leadership while grappling with the domestic implications of AI’s integration into societal frameworks. This approach blends the strategic fostering of AI innovation with piecemeal legislative responses to emerging challenges. As AI’s influence continues to grow, the United States faces the imperative of strengthening its governance structure, requiring a nuanced strategy that transcends technological aspects to encompass ethical, social, and legal considerations. The United States must engage in international collaboration and dialogue to develop harmonized standards and best practices for the responsible development and deployment of generative AI. By learning from the experiences of other countries and contributing to the development of international frameworks, the United States can help shape a global regulatory environment that fosters innovation, protects individual rights, and upholds ethical principles in the age of generative AI.

Case Studies

Case I: OpenAI and GDPR Compliance Challenges

OpenAI recently faced inquiries regarding whether its generative AI platform, ChatGPT, aligns with the privacy standards set by the EU.54 A formal complaint in Poland initiated an investigation, alleging that OpenAI had committed multiple breaches of the GDPR.55

The complaint, filed by privacy researcher Lukasz Olejnik, pointed out several instances of GDPR noncompliance.56 It centered on OpenAI’s handling of personal data, particularly their response (or lack thereof) to requests for data correction and access. Olejnik’s concerns reflect a broader anxiety over AI systems’ compliance with stringent EU data protection principles, especially concerning transparency, fairness, and user control over personal data.

This case further illustrates the challenges of ensuring that AI systems adhere to GDPR regulations. The underlying technology of ChatGPT, a large language model (LLM), is trained on vast amounts of data, potentially including sensitive personal information. The investigation focused on whether OpenAI had a lawful basis for data processing, met transparency and fairness requirements, and upheld data subjects’ rights under the GDPR.57

This investigation has significant implications for the generative AI sector, especially as GDPR compliance is essential for operating within the EU. The outcome could influence how AI technologies are developed and managed, stressing the importance of privacy by design—a foundational principle of GDPR. Moreover, it poses questions about how international companies like OpenAI, headquartered outside the EU, can meet local privacy standards and what mechanisms are in place for cross-border regulatory cooperation.

Current Developments

The Polish Personal Data Protection Office (UODO) has demonstrated urgency and transparency in addressing the complaint, indicative of the growing EU-wide focus on regulating AI technologies.58 Alongside the Polish inquiry, Italy’s Data Protection Authority (DPA) had also taken action, highlighting a pattern of increasing regulatory scrutiny of AI across the EU.59

The resolution of this case could set a precedent for how privacy laws apply to generative AI globally. It may prompt OpenAI and similar companies to revise their data processing practices, with possible broader effects on the industry’s approach to privacy by design.

Conclusion

This case study exemplifies the intersection of innovative technology and data protection law, underscoring the importance of GDPR compliance for AI companies. The evolving situation offers a critical lens through which to examine how legal frameworks might adapt to address the complexities introduced by generative AI systems like ChatGPT.

Case II: Adopting Generative AI in Legal Adjudication

In February 2023, Judge Juan Manuel Padilla from Cartagena, Colombia, made headlines by using ChatGPT to assist in a legal ruling.60 This case revolved around determining who should bear the costs of healthcare for a child with autism. Judge Padilla’s decision was informed by ChatGPT in addition to standard legal references.

This case is evaluated from an interdisciplinary perspective, considering legal, sociological, and technological dimensions.61 It critically assesses the role of AI-generated advice relative to established legal precedents and the interpretation of statutes.

The employment of AI tools like ChatGPT in judicial reasoning ignites important discussions. Judge Padilla’s use of ChatGPT highlights issues of dependability and the role of AI in complex legal decisions. Although AI’s efficiency in judicial processes is recognized, there is a need for caution because of its potential for unpredictable outcomes and occasional errors.

Sociological and Legal Perspectives

The influence of AI on the roles of professionals, especially in fields heavily dependent on human discretion, is considered from a sociological viewpoint. From a legal standpoint, the case indicates a significant shift, challenging us to rethink how we balance machine-generated insights and human decision-making within the legal sector.

Technological Evaluation

Technologically, this case draws attention to the advanced capabilities of natural language processing (NLP) tools like ChatGPT. These tools can process and condense extensive legal information, yet they do not replace the essential role of human expertise and evaluative judgment.

This case acts as a critical point of reference for the growing intersection of AI with judicial practices. It signifies a gradual trend toward embracing AI support within legal systems around the world, ensuring that such innovations serve as a supplement to, rather than a replacement for, human judgment. The Colombian judiciary’s approach illustrates a careful yet forward-looking application of technology, prioritizing human discernment in the dispensation of justice.

In conclusion, the case studies of OpenAI’s GDPR compliance challenges and the adoption of generative AI in legal adjudication underscore the pressing regulatory issues surrounding this technology. OpenAI’s case illustrates the difficulties in ensuring AI systems adhere to data protection regulations, particularly concerning transparency, fairness, and user control over personal data. Meanwhile, the use of ChatGPT in a legal ruling in Colombia highlights the need for clear guidelines on the role of AI in complex decision-making processes, ensuring accountability and alignment with legal and ethical principles. These cases demonstrate the urgency for comprehensive legal frameworks that can address the unique challenges posed by generative AI, balancing innovation with the protection of individual rights and societal values.

Discussion

The Disruptive Impact of Generative AI

The core of generative AI’s disruptive capacity lies in its autonomy and unpredictability.62 Unlike traditional software, generative AI does not require explicit programming to produce specific outputs. Instead, it learns patterns and relationships within the training data and produces outputs based on learned criteria, sometimes resulting in content that is novel, unexpected, or ethically ambiguous.

Generative AI’s integration into societal frameworks has been transformative. Its ability to produce realistic synthetic media raises immediate ethical concerns in many fields, including education, human resources management, and industry.63 Deepfakes, in particular, enable identity theft, misinformation, and fake news, eroding public trust in digital content.64 For instance, the generation of nonconsensual synthetic imagery compromises individuals’ dignity, privacy, and public image, creating legal quagmires around defamation and consent.

Additionally, generative AI holds significant implications for intellectual property rights.65 As these systems create artworks, music, or literary pieces, they challenge traditional notions of authorship and originality. The current legal frameworks in many jurisdictions are ill-equipped to handle questions of copyright for AI-generated works, creating a vacuum of legal uncertainty.

Moreover, AI’s role in decision-making, especially in judicial or law enforcement contexts, can be fraught with bias, particularly if the training data reflect preexisting societal biases. The opacity of these systems, often described as “black boxes,” complicates accountability and undermines the principles of justice, given that the rationale behind AI-generated decisions may be untraceable or incomprehensible to humans.

Economically, generative AI offers a double-edged sword. On the one hand, it promises substantial efficiencies and cost savings for businesses, reducing the need for human labor in creative endeavors such as design, writing, or programming.66 It accelerates innovation by proposing multiple solutions or ideas, reducing time to market, and potentially unlocking new types of services and products.

Conversely, these efficiencies come with inherent risks. The potential displacement of jobs because of automation, particularly in creative industries, raises economic and social concerns. There is looming anxiety over “technological unemployment,” a term coined to describe job loss resulting from technological advancement.

Beyond labor displacement, generative AI introduces market risks. Its capacity for mass-producing convincing counterfeit content or flooding markets with AI-generated products might destabilize industries, disrupt pricing, and erode consumer trust. Furthermore, the technology could be employed maliciously, for instance, in cyberattacks or market manipulations, which could have far-reaching economic consequences.

On balance, generative AI’s disruptive impact is profound. Although it presents avenues for immense economic growth and innovation, it simultaneously ushers in complex ethical dilemmas and socioeconomic risks. Its propensity to autonomously generate content, for better or worse, necessitates a robust ethical and legal framework that recognizes the nuances of this technology. As such, societies, particularly legislators and policymakers, find themselves at a crossroads in harnessing its potential while mitigating the risks it poses to foundational societal norms and economic structures.

Harmonizing the Relationship between “Disruptive Innovation” and “Experimental Regulation”

Generative AI represents a “disruptive innovation” that will profoundly impact various aspects of the traditional technology product market. This concept is rooted in economic theory, particularly in the “creative destruction” theory proposed by Austrian economist Schumpeter.67 According to Schumpeter, every inventor of a new product initially enjoys a monopoly position, which serves as a crucial incentive for them to introduce novel products and methods to the market.68 However, as competitors emerge and imitate the new products, the initial advantage of the inventor diminishes, leading to a dynamic process of innovation and imitation that characterizes true competition.

In the context of generative AI, such as ChatGPT, it can be seen as a disruptive innovation compared to traditional weak AI, like intelligent customer service. Several key characteristics highlight this disruption:

  1. Originated from a new market: Generative AI has opened up new possibilities and applications, creating a distinct market segment that did not previously exist.

  2. Addressing consumer needs: Many technology companies have prioritized their own profits over continuous investment in technological innovation, resulting in higher costs and limited access to products for low-end consumers. Generative AI, like ChatGPT, caters to the demands of these consumers by providing more accessible and suitable products.

  3. Unique operating mechanism: Generative AI operates differently from traditional computer translation and search engines, following an independent technical path.

  4. Continuous performance improvement: Products and services offered by generative AI, including ChatGPT, have continually enhanced their performance. This has significantly influenced users’ daily lives and work habits, gradually replacing traditional tools and services.69

The disruptive innovation brought about by generative AI has compelled regulatory reforms in AI governance, giving rise to new regulatory models like “experimental supervision.” This approach emphasizes a flexible and adaptable supervisory method, creating a “buffer zone” within the current regulatory framework to determine the most appropriate level of supervision for new technologies and business models. Regulators are encouraged to adopt an inclusive and prudent regulatory strategy, proactively examining technological innovations and tapping into their potential to stimulate technological dividends. In the face of technological innovation, it is crucial to strike the right balance in supervision. Insufficient supervision can lead to the accumulation of risks, whereas excessive supervision may stifle innovation. Hence, achieving optimized generative AI governance necessitates finding harmony between technological supervision and innovation.70

Identifying the Gaps in Current AI Legislation

The rapid evolution of AI technologies has revealed gaps in legislation, primarily because of the reactive nature of legal responses. Existing laws often fail to anticipate the transformative impact of AI, leading to ambiguities in accountability, ethical quandaries, and regulatory mismatches.

For instance, current legislation does not fully address the complexity of intellectual property (IP) rights in creations by generative AI. Traditional IP laws hinge on human authorship, a concept challenged by AI-generated content, leading to contested debates on ownership, infringement, and compensation.

Additionally, there is an accountability vacuum. When AI systems make autonomous decisions, traditional legal frameworks falter in assigning liability, especially in cases where decisions lead to harmful outcomes. This lack of clarity could potentially shield individuals and entities from being held responsible for AI’s actions, underlining the urgency for explicit legal standards on AI accountability.

Privacy laws, too, are under strain. Generative AI’s capability to produce deepfakes and synthetic data blurs the lines of consent, identity rights, and data protection. Current privacy regulations are not robust enough to counter these emerging threats, necessitating stronger data governance laws.

Intersecting AI with Ethical and Social Norms

Beyond legal technicalities, the integration of AI into societal frameworks presents ethical dilemmas. The principles of fairness, justice, and equality are at risk, as algorithmic biases could lead to discriminatory practices, reinforcing societal inequities under the guise of automated neutrality.

The solution is not merely legislative; it requires a recalibration of the AI development process itself. There must be an interdisciplinary approach to AI ethics, incorporating diverse stakeholder perspectives in the development and deployment of AI technologies.71 This includes robust public discourse, collaboration between technologists and policymakers, and meaningful engagement with social norms and values.

There is also a need for greater transparency and explainability in AI systems. As legal standards evolve, AI systems must be designed to provide rationales for their decisions, especially in high-stakes domains like healthcare, law enforcement, and financial services. This transparency is crucial for validating AI decisions against ethical, legal, and social benchmarks.

Proposing a Harmonized Legal Framework

Moving toward a harmonized legal framework for AI requires a comprehensive strategy. First, there needs to be an international consensus on the ethical and legal standards governing AI. Although AI innovation operates on a global scale, laws remain largely national or regional, leading to potential conflicts and inconsistencies. Collaborative international frameworks, developed in consultation with global AI experts, ethicists, and legal scholars, can provide a cohesive foundation for national laws.

Next, a proactive legislative approach is essential, where laws are not just responses to technology-induced incidents but are frameworks developed with foresight, anticipating future AI advancements and their societal implications.

Furthermore, creating a dedicated regulatory body for AI could streamline the governance process. This agency would oversee AI development, ensuring compliance with legal and ethical standards, enforcing accountability measures, and providing a structured platform for redressal of AI-related grievances.

Lastly, legal education and public literacy on AI need enhancement. As AI becomes more entrenched in daily life, a broader understanding of its workings, limitations, and legal implications is imperative for informed public participation in AI governance.

Engagement with Industry and Civil Society

A harmonized legal framework is not solely a governmental endeavor. It demands active engagement with AI industry stakeholders and civil society. Partnerships with tech companies can facilitate compliance and self-regulation, where organizations commit to ethical standards in AI development, data handling, and user privacy.

Civil society, too, plays a crucial role. Public advocacy groups, consumer rights organizations, and academic institutions can provide valuable insights into societal expectations from AI, driving legislative and policy measures that reflect the public interest. Moreover, they can hold both governments and corporations accountable, ensuring that AI serves the public good.

In essence, bridging the legal gaps in AI governance is a complex, dynamic task. It requires not only an overhaul of existing legal frameworks to accommodate the unique challenges posed by AI but also an integrative approach that aligns legal, ethical, and social norms with technological advancements. As generative AI continues to push boundaries, the need for a harmonized, comprehensive, and proactive legal system becomes increasingly paramount. This is not just a question of law but a societal imperative to ensure that AI evolves in a manner that is consistent with democratic values, human rights, and the principles of justice and equity.72

Conclusion

As we stand on the brink of a new era shaped by the rapid development of generative AI, we face unprecedented legal, ethical, and social challenges. This article has explored the complex landscape of AI’s disruptive impact, the current US governance strategies, and the urgent need for a harmonized legal framework. As generative AI becomes more integrated into our daily lives, developing a comprehensive and proactive legal approach is not just desirable but necessary.

Recognizing AI’s disruptive potential is the first step in this journey. Generative AI’s ability to create content, make decisions, and perform tasks that traditionally required human intelligence is a testament to human ingenuity. However, this progress also brings critical issues to the forefront, such as IP rights, privacy, consent, and the nature of human creativity. As these technologies advance, the legal system must address fundamental questions about authorship, ownership, and the potential for misuse.

Examining US governance strategies for AI reveals efforts to balance innovation and the public interest. Although these initiatives are commendable, they often fall short of addressing the multifaceted challenges posed by AI. The current legislation, largely reactive and fragmented, struggles to keep pace with the rapid evolution of AI technology. The lack of global harmonization in legal standards further complicates the regulatory landscape, potentially hindering international cooperation and ethical consensus.

A robust, harmonized legal framework is essential. This article proposes an integrated approach that emphasizes international collaboration, proactive legislation, the establishment of a dedicated regulatory body, and an inclusive dialogue with diverse stakeholders. Such a framework aims to align the law with societal norms, ethical considerations, and the transformative nature of AI.

Engaging industry and civil society is crucial in this endeavor. The AI industry must actively participate in this regulatory overhaul, prioritizing transparency, ethical compliance, and public welfare in AI development. Civil society plays a vital role in shaping public discourse, setting societal benchmarks for AI, and holding power structures accountable.

As generative AI technologies advance, so must our legal frameworks, ethical guidelines, and societal norms. This is not merely a legal imperative but a commitment to upholding human dignity, equity, and justice in the face of this new technological era. Striking a balance between enabling innovation and safeguarding human interests is delicate, but it is a balance we must strive to achieve. The future of AI reflects not only human ingenuity but also our values, principles, and humanity.

This article makes significant contributions to the field of generative AI by offering a multifaceted analysis that blends legal, technological, and societal perspectives. Our doctrinal legal analysis identifies gaps in AI regulation, contributing to the discourse on necessary legal reforms and providing a foundation for future legislative development.

Notes

1.

Oleksandr Striuk, Yuriy Kondratenko, Ievgen Sidenko, and Alla Vorobyova, “Generative Adversarial Neural Network for Creating Photorealistic Images,” in 2020 IEEE 2nd International Conference on Advanced Trends in Information Theory (ATIT) (Kyiv, Ukraine: IEEE, 2020), 368–71, https://ieeexplore.ieee.org/abstract/document/9349326/; Rajkumar Palaniappan, Kenneth Sundaraj, and Sebastian Sundaraj, “Artificial Intelligence Techniques Used in Respiratory Sound Analysis—A Systematic Review,” Biomedizinische Technik/Biomedical Engineering 59, no. 1 (January 1, 2014), https://doi.org/10.1515/bmt-2013-0074; Kesavan Venkatesh, Samantha M. Santomartino, Jeremias Sulam, and Paul H. Yi, “Code and Data Sharing Practices in the Radiology Artificial Intelligence Literature: A Meta-Research Study,” Radiology: Artificial Intelligence 4, no. 5 (September 1, 2022): e220081, https://doi.org/10.1148/ryai.220081; Mohammad Javed Ali and Ali Djalilian, “Readership Awareness Series—Paper 4: Chatbots and ChatGPT—Ethical Considerations in Scientific Publications,” Seminars in Ophthalmology 38, no. 5 (July 4, 2023): 403–4, https://doi.org/10.1080/08820538.2023.2193444.

2.

Mladan Jovanovic and Mark Campbell, “Generative Artificial Intelligence: Trends and Prospects,” Computer 55, no. 10 (2022): 107–12.

3.

Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever, “Language Models Are Unsupervised Multitask Learners,” OpenAI Blog 1, no. 8 (2019): 9.

4.

Mariarosaria Taddeo and Luciano Floridi, “How AI Can Be a Force for Good,” Science 361, no. 6404 (August 24, 2018): 751–52, https://doi.org/10.1126/science.aat5991.

5.

Taddeo and Floridi, “How AI Can Be a Force for Good.”

6.

Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (St. Martin’s Publishing Group, 2018).

7.

Philipp Hacker, Andreas Engel, and Marco Mauer, “Regulating ChatGPT and Other Large Generative AI Models,” in 2023 ACM Conference on Fairness, Accountability, and Transparency (Chicago IL: ACM, 2023), 1112–23, https://doi.org/10.1145/3593013.3594067.

9.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio, “Generative Adversarial Nets,” Advances in Neural Information Processing Systems 27 (2014), https://proceedings.neurips.cc/paper/5423-generative-adversarial-nets; Diederik P. Kingma and Max Welling, “Auto-Encoding Variational Bayes,” arXiv (December 10, 2022), http://arxiv.org/abs/1312.6114; Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, and Amanda Askell, “Language Models Are Few-Shot Learners,” Advances in Neural Information Processing Systems 33 (2020): 1877–1901.

10.

Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba, “Generating Videos with Scene Dynamics,” Advances in Neural Information Processing Systems 29 (2016), https://proceedings.neurips.cc/paper/2016/hash/04025959b191f8f9de3f924f0940515f-Abstract.html.

11.

Mario Lucic, Karol Kurach, Marcin Michalski, Sylvain Gelly, and Olivier Bousquet, “Are Gans Created Equal? A Large-Scale Study,” Advances in Neural Information Processing Systems 31 (2018), https://proceedings.neurips.cc/paper/7350-are-gans-created-equal-a-large-scale-study.

12.

Daniel Adiwardana, Minh-Thang Luong, David R. So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, et al., “Towards a Human-like Open-Domain Chatbot,” arXiv (February 27, 2020), https://doi.org/10.48550/arXiv.2001.09977.

13.

Lars Mescheder, Andreas Geiger, and Sebastian Nowozin, “Which Training Methods for Gans Do Actually Converge?” In Proceedings of the 35th International Conference on Machine Learning 80 (2018): 3481–90, https://proceedings.mlr.press/v80/mescheder18a.

14.

Tero Karras, Samuli Laine, and Timo Aila, “A Style-Based Generator Architecture for Generative Adversarial Networks,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, 4401–10, http://openaccess.thecvf.com/content_CVPR_2019/html/Karras_A_Style-Based_Generator_Architecture_for_Generative_Adversarial_Networks_CVPR_2019_paper.html.

15.

Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai, “Man Is to Computer Programmer as Woman Is to Homemaker? Debiasing Word Embeddings,” in Advances in Neural Information Processing Systems, Vol. 29 (Curran Associates, Inc., 2016), https://proceedings.neurips.cc/paper_files/paper/2016/hash/a486cd07e4ac3d270571622f4f316ec5-Abstract.html.

16.

Cynthia Rudin, “Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead,” Nature Machine Intelligence 1, no. 5 (2019): 206–15.

17.

Michael Veale, Max Van Kleek, and Reuben Binns, “Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making,” in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (Montreal, QC: ACM, 2018), 1–14, https://doi.org/10.1145/3173574.3174014.

18.

Mika Westerlund, “The Emergence of Deepfake Technology: A Review,” Technology Innovation Management Review 9, no. 11 (2019), https://doi.org/10.22215/timreview/1282.

19.

Brent Daniel Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter, and Luciano Floridi, “The Ethics of Algorithms: Mapping the Debate,” Big Data & Society 3, no. 2 (December 2016), https://doi.org/10.1177/2053951716679679.

20.

Saleema Amershi, Dan Weld, Mihaela Vorvoreanu, Adam Fourney, Besmira Nushi, Penny Collisson, Jina Suh, et al., “Guidelines for Human-AI Interaction,” in Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (Glasgow, Scotland: ACM, 2019), 1–13, https://doi.org/10.1145/3290605.3300233.

21.

Ed Felten, Manav Raj, and Robert Seamans, “How Will Language Modelers like ChatGPT Affect Occupations and Industries?” arXiv (March 18, 2023), http://arxiv.org/abs/2303.01157.

22.

Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov, “Membership Inference Attacks against Machine Learning Models,” in 2017 IEEE Symposium on Security and Privacy (SP) (San Jose, CA, 2017), 3–18. https://doi.org/10.1109/SP.2017.41.

23.

Jan Smits and Tijn Borghuis, “Generative AI and Intellectual Property Rights,” in Law and Artificial Intelligence, ed. Bart Custers and Eduard Fosch-Villaronga, Vol. 35, Information Technology and Law Series (The Hague: T.M.C. Asser Press, 2022), 323–44, https://doi.org/10.1007/978-94-6265-523-2_17.

24.

Nicola Lucchi, “ChatGPT: A Case Study on Copyright Challenges for Generative Artificial Intelligence Systems,” European Journal of Risk Regulation, 2023, 1–23, https://doi.org/10.1017/err.2023.59.

25.

Nishith Reddy Mannuru, Sakib Shahriar, Zoë A Teel, Ting Wang, Brady D. Lund, Solomon Tijani, Chalermchai Oak Pohboon, et al., “Artificial Intelligence in Developing Countries: The Impact of Generative Artificial Intelligence (AI) Technologies for Development,” Information Development (September 14, 2023), https://doi.org/10.1177/02666669231200628.

26.

Urs Gasser and Virgilio A.F. Almeida, “A Layered Model for AI Governance,” IEEE Internet Computing 21, no. 6 (2017): 58–62.

27.

D. Trump, “Maintaining American Leadership in Artificial Intelligence,” Federal Register, 2019, https://www.federalregister.gov/documents/2019/02/14/2019-02544/maintaining-american-leadership-in-artificial-intelligence.

28.

Hacker et al., “Regulating ChatGPT and Other Large Generative AI Models.”

29.

Ibid.

30.

Olivia J. Erdélyi and Judy Goldsmith, “Regulating Artificial Intelligence: Proposal for a Global Solution,” in Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (New Orleans, LA: ACM, 2018), 95–101, https://doi.org/10.1145/3278721.3278731.

31.

Magali Eben, Kristofer Erickson, Martin Kretschmer, Gabriele Cifrodelli, Zihao Li, Stefan Luca, Bartolomeo Meletti, and Philip Schlesinger, “Priorities for Generative AI Regulation in the UK: CREATe Response to the Digital Regulation Cooperation Forum (DRCF),” (Enlighten Publications, 2023), https://eprints.gla.ac.uk/306163/.

32.

Oren Bar-Gill, “Algorithmic Price Discrimination: When Demand Is a Function of Both Preferences and (Mis)Perceptions,” SSRN Scholarly Paper (Rochester, NY, May 29, 2018), https://papers.ssrn.com/abstract=3184533.

33.

Abid Haleem, Mohd Javaid, and Ravi Pratap Singh, “An Era of ChatGPT as a Significant Futuristic Support Tool: A Study on Features, Abilities, and Challenges,” BenchCouncil Transactions on Benchmarks, Standards and Evaluations 2, no. 4 (2022): 100089.

34.

Jeffrey A. Lefstin, Peter S. Menell, and David O. Taylor, “Final Report of the Berkeley Center for Law & Technology Section 101 Workshop: Addressing Patent Eligibility Challenges,” Berkeley Technology Law Journal 33 (2018): 551.

35.

Konstantinos I. Roumeliotis and Nikolaos D. Tselikas, “ChatGPT and Open-AI Models: A Preliminary Review,” Future Internet 15, no. 6 (2023): 192, https://doi.org/10.3390/fi15060192.

36.

Susan Y. Tull and Paula E. Miller, “Patenting Artificial Intelligence: Issues of Obviousness, Inventorship, and Patent Eligibility,” Journal of Robotics, Artificial Intelligence & Law 1 (2018): 313.

37.

Partha Pratim Ray, “ChatGPT: A Comprehensive Review on Background, Applications, Key Challenges, Bias, Ethics, Limitations and Future Scope,” Internet of Things and Cyber-Physical Systems 3 (2023): 121–54, https://www.sciencedirect.com/science/article/pii/S266734522300024X.

38.

Michael Felderer, Matthias Büchler, Martin Johns, Achim D. Brucker, Ruth Breu, and Alexander Pretschner, “Security Testing: A Survey,” in Advances in Computers, ed. Atif Memon, Vol. 101 (College Park, MD: Elsevier, 2016), 1–51. https://www.sciencedirect.com/science/article/pii/S0065245815000649.

39.

Warren J. Von Eschenbach, “Transparency and the Black Box Problem: Why We Do Not Trust AI,” Philosophy & Technology 34, no. 4 (December 2021): 1607–22. https://doi.org/10.1007/s13347-021-00477-0.

40.

Shlomit Yanisky-Ravid, “Generating Rembrandt: Artificial Intelligence, Copyright, and Accountability in the 3A Era: The Human-like Authors Are Already Here: A New Model,” Michigan State Law Review (2017): 659.

41.

Mélanie Bernhardt, Charles Jones, and Ben Glocker, “Potential Sources of Dataset Bias Complicate Investigation of Underdiagnosis by Machine Learning Algorithms,” Nature Medicine 28, no. 6 (June 2022): 1157–58, https://doi.org/10.1038/s41591-022-01846-8.

42.

Eubanks, Automating Inequality.

43.

Yanisky-Ravid, “Generating Rembrandt.”

44.

Laurel Witt, “Preventing the Rogue Bot Journalist: Protection from Non-Human Defamation,” Colorado Technology Law Journal 15 (2016): 517.

45.

Vincent C. Müller and Nick Bostrom, “Future Progress in Artificial Intelligence: A Survey of Expert Opinion,” in Fundamental Issues of Artificial Intelligence, ed. Vincent C. Müller, Synthese Library 376 (Cham: Springer International Publishing, 2016), 555–72, https://doi.org/10.1007/978-3-319-26485-1_33.

46.

Shoshana Zuboff, “Surveillance Capitalism and the Challenge of Collective Action,” New Labor Forum 28, no. 1 (2019): 10–29, https://journals.sagepub.com/doi/full/10.1177/1095796018819461.

47.

M. Zajko, “Conservative AI and Social Inequality: Conceptualizing Alternatives to Bias through Social Theory,” AI & Society 36 (2021): 1047–56, https://link.springer.com/article/10.1007/S00146-021-01153-9.

48.

Karen Levy and Bruce Schneier, “Privacy Threats in Intimate Relationships,” Journal of Cybersecurity 6, no. 1 (1 January 2020): tyaa006, https://doi.org/10.1093/cybsec/tyaa006.

49.

Trump, “Maintaining American Leadership in Artificial Intelligence.”

50.

Hacker et al., “Regulating ChatGPT and Other Large Generative AI Models.”

51.

Nathalie De Marcellis-Warin, Frédéric Marty, Eva Thelisson, and Thierry Warin, “Artificial Intelligence and Consumer Manipulations: From Consumer’s Counter Algorithms to Firm’s Self-Regulation Tools,” AI and Ethics 2, no. 2 (May 2022): 259–68, https://doi.org/10.1007/s43681-022-00149-5.

52.

Bibhu Dash and Pawankumar Sharma, “Are ChatGPT and Deepfake Algorithms Endangering the Cybersecurity Industry? A Review,” International Journal of Engineering and Applied Sciences 10, no. 1 (2023): 21–39.

53.

Alexander J.A.M. Van Deursen and Ellen J. Helsper, “The Third-Level Digital Divide: Who Benefits Most from Being Online?” in Communication and Information Technologies Annual, Vol. 10 (Emerald Group Publishing Limited, 2015), 29–52, https://www.emerald.com/insight/content/doi/10.1108/S2050-206020150000010002/full/html.

54.

Giorgio Franceschelli and Mirco Musolesi, “Copyright in Generative Deep Learning,” Data & Policy 4 (2022): e17. https://doi.org/10.1017/dap.2022.10.

55.

Francesco Paolo Levantino, “Generative and AI-Powered Oracles: ‘What Will They Say about You?’” Computer Law & Security Review 51 (1 November 2023): 105898, https://doi.org/10.1016/j.clsr.2023.105898.

56.

Lukasz Olejnik, “On the Governance of Privacy-Preserving Systems for the Web: Should Privacy Sandbox Be Governed?” in Handbook on the Politics and Governance of Big Data and Artificial Intelligence (Edward Elgar Publishing, 2023), 279–314, https://www.elgaronline.com/edcollchap/book/9781800887374/book-part-9781800887374-22.xml.

57.

Timo Minssen, Effy Vayena, and I. Glenn Cohen, “The Challenges for Regulating Medical Use of ChatGPT and Other Large Language Models,” JAMA 330, no. 4 (2023): 315–16, https://doi.org/10.1001/jama.2023.9651.

58.

Zdzislaw Polkowski, “The Method of Implementing the General Data Protection Regulation in Business and Administration,” in 2018 10th International Conference on Electronics, Computers and Artificial Intelligence (ECAI) (IEEE, 2018), 1–6, https://ieeexplore.ieee.org/abstract/document/8679062/.

59.

Pier Giorgio Chiara, “Italian DPA v. OpenAI’s ChatGPT: The Reasons Behind the Investigations and the Temporary Limitation to Processing,” Journal of Law and Technology 31 (2023): 2.

60.

Nishant A. Parikh, “Empowering Business Transformation: The Positive Impact and Ethical Considerations of Generative AI in Software Product Management—A Systematic Literature Review,” in Transformational Interventions for Business, Technology, and Healthcare, ed. Darrell Norman Burrell (Hershey, PA: IGI Global, 2023), 269–93.

61.

Pritish Gandhi and Vineet Talwar, “Artificial Intelligence and ChatGPT in the Legal Context,” International Journal of Medical Sciences 10 (2023): 1–2; Ömer Aydın and Enis Karaarslan, “Is ChatGPT Leading Generative AI? What Is beyond Expectations?,” Academic Platform Journal of Engineering and Smart Systems 11, no. 3 (2023): 118–34; Muhammad Usman Hadi, Qasem Al Tashi, Rizwan Qureshi, Abbas Shah, Amgad Muneer, Muhammad Irfan, Anas Zafar, et al., “Large Language Models: A Comprehensive Survey of Its Applications, Challenges, Limitations, and Future Prospects,” TechRxiv (November 16, 2023), https://doi.org/10.36227/techrxiv.23589741.v4.

62.

Parikh, “Empowering Business Transformation.”

63.

Eben et al., “Priorities for Generative AI Regulation in the UK”; Pawan Budhwar, Soumyadeb Chowdhury, Geoffrey Wood, Herman Aguinis, Greg J. Bamber, Jose R. Beltran, Paul Boselie, et al., “Human Resource Management in the Age of Generative Artificial Intelligence: Perspectives and Research Directions on ChatGPT,” Human Resource Management Journal 33, no. 3 (July 2023): 606–59, https://doi.org/10.1111/1748-8583.12524; Nitin Rane, “ChatGPT and Similar Generative Artificial Intelligence (AI) for Smart Industry: Role, Challenges and Opportunities for Industry 4.0, Industry 5.0 and Society 5.0,” Challenges and Opportunities for Industry 4 (2023), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4603234.

64.

Luigi De Angelis, Francesco Baglivo, Guglielmo Arzilli, Gaetano Pierpaolo Privitera, Paolo Ferragina, Alberto Eugenio Tozzi, and Caterina Rizzo, “ChatGPT and the Rise of Large Language Models: The New AI-Driven Infodemic Threat in Public Health,” Frontiers in Public Health 11 (25 April 2023): 1166120, https://doi.org/10.3389/fpubh.2023.1166120.

65.

Jessica L. Gillotte, “Copyright Infringement in AI-Generated Artworks,” UC Davis Law Review 53 (2019): 2655.

66.

Daniel Houli, Marie L. Radford, and Vivek K. Singh, “‘COVID19 Is_’: The Perpetuation of Coronavirus Conspiracy Theories via Google Autocomplete,” Proceedings of the Association for Information Science and Technology 58, no. 1 (October 2021): 218–29, https://doi.org/10.1002/pra2.450.

67.

Uri Gal, “ChatGPT Is a Data Privacy Nightmare. If You’ve Ever Posted Online, You Ought to Be Concerned,” The Conversation, 2023.

68.

D. Hillemann, “Does ChatGPT Comply with EU GDPR Regulations? Investigating the Right to Be Forgotten,” Fieldfisher, 2023.

69.

Haleem et al., “An Era of ChatGPT as a Significant Futuristic Support Tool.”

70.

Ariel Ezrachi, Virtual Competition: The Promise and Perils of the Algorithm-Driven Economy (Harvard University Press, 2016), https://doi.org/10.4159/9780674973336.

71.

Michael A. Peters, Liz Jackson, Marianna Papastephanou, Petar Jandrić, George Lazaroiu, Colin W. Evers, Bill Cope, et al., “AI and the Future of Humanity: ChatGPT-4, Philosophy and Education—Critical Responses,” Educational Philosophy and Theory (June 1, 2023): 1–35, https://doi.org/10.1080/00131857.2023.2213437.

72.

Ibid.

Bibliography

Adiwardana, Daniel, Minh-Thang Luong, David R. So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, et al. “Towards a Human-like Open-Domain Chatbot.” arXiv (February 27, 2020). https://doi.org/10.48550/arXiv.2001.09977.

Ali, Mohammad Javed, and Ali Djalilian. “Readership Awareness Series—Paper 4: Chatbots and ChatGPT—Ethical Considerations in Scientific Publications.” Seminars in Ophthalmology 38, no. 5 (July 4, 2023): 403–4. https://doi.org/10.1080/08820538.2023.2193444.

Amershi, Saleema, Dan Weld, Mihaela Vorvoreanu, Adam Fourney, Besmira Nushi, Penny Collisson, Jina Suh, et al. “Guidelines for Human-AI Interaction.” In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–13. Glasgow, Scotland: ACM, 2019. https://doi.org/10.1145/3290605.3300233.

Aydın, Ömer, and Enis Karaarslan. “Is ChatGPT Leading Generative AI? What Is beyond Expectations?” Academic Platform Journal of Engineering and Smart Systems 11, no. 3 (2023): 118–34. https://doi.org/10.21541/apjess.1293702.

Bar-Gill, Oren. “Algorithmic Price Discrimination: When Demand Is a Function of Both Preferences and (Mis)Perceptions.” SSRN Scholarly Paper. Rochester, NY, May 29, 2018. https://papers.ssrn.com/abstract=3184533.
Bernhardt, Mélanie, Charles Jones, and Ben Glocker. “Potential Sources of Dataset Bias Complicate Investigation of Underdiagnosis by Machine Learning Algorithms.” Nature Medicine 28, no. 6 (June 2022): 1157–58. https://doi.org/10.1038/s41591-022-01846-8.

Bolukbasi, Tolga, Kai-Wei Chang, James Y. Zou, Venkatesh Saligrama, and Adam T. Kalai. “Man Is to Computer Programmer as Woman Is to Homemaker? Debiasing Word Embeddings.” In Advances in Neural Information Processing Systems, Vol. 29. Curran Associates, Inc., 2016. https://proceedings.neurips.cc/paper_files/paper/2016/hash/a486cd07e4ac3d270571622f4f316ec5-Abstract.html.

Brown, Tom, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, and Amanda Askell. “Language Models Are Few-Shot Learners.” Advances in Neural Information Processing Systems 33 (2020): 1877–901.

Budhwar, Pawan, Soumyadeb Chowdhury, Geoffrey Wood, Herman Aguinis, Greg J. Bamber, Jose R. Beltran, Paul Boselie, et al. “Human Resource Management in the Age of Generative Artificial Intelligence: Perspectives and Research Directions on ChatGPT.” Human Resource Management Journal 33, no. 3 (July 2023): 606–59. https://doi.org/10.1111/1748-8583.12524.

Chiara, Pier Giorgio. “Italian DPA v. OpenAI’s ChatGPT: The Reasons Behind the Investigations and the Temporary Limitation to Processing.” Journal of Law and Technology 31 (2023): 2.

Dash, Bibhu, and Pawankumar Sharma. “Are ChatGPT and Deepfake Algorithms Endangering the Cybersecurity Industry? A Review.” International Journal of Engineering and Applied Sciences 10, no. 1 (2023): 21–39.

De Angelis, Luigi, Francesco Baglivo, Guglielmo Arzilli, Gaetano Pierpaolo Privitera, Paolo Ferragina, Alberto Eugenio Tozzi, and Caterina Rizzo. “ChatGPT and the Rise of Large Language Models: The New AI-Driven Infodemic Threat in Public Health.” Frontiers in Public Health 11 (April 25, 2023): 1166120. https://doi.org/10.3389/fpubh.2023.1166120.

De Marcellis-Warin, Nathalie, Frédéric Marty, Eva Thelisson, and Thierry Warin. “Artificial Intelligence and Consumer Manipulations: From Consumer’s Counter Algorithms to Firm’s Self-Regulation Tools.” AI and Ethics 2, no. 2 (May 2022): 259–68. https://doi.org/10.1007/s43681-022-00149-5.

Eben, Magali, Kristofer Erickson, Martin Kretschmer, Gabriele Cifrodelli, Zihao Li, Stefan Luca, Bartolomeo Meletti, and Philip Schlesinger. “Priorities for Generative AI Regulation in the UK: CREATe Response to the Digital Regulation Cooperation Forum (DRCF).” Enlighten Publications, 2023. https://eprints.gla.ac.uk/306163/.
Erdélyi, Olivia J., and Judy Goldsmith. “Regulating Artificial Intelligence: Proposal for a Global Solution.” In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, 95–101. New Orleans, LA: ACM, 2018. https://doi.org/10.1145/3278721.3278731.

Eubanks, Virginia. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin’s Press, 2018.

Ezrachi, Ariel. Virtual Competition: The Promise and Perils of the Algorithm-Driven Economy. Harvard University Press, 2016. https://doi.org/10.4159/9780674973336.

Felderer, Michael, Matthias Büchler, Martin Johns, Achim D. Brucker, Ruth Breu, and Alexander Pretschner. “Security Testing: A Survey.” In Advances in Computers, edited by Atif Memon, Vol. 101, 1–51. College Park, MD: Elsevier, 2016. https://www.sciencedirect.com/science/article/pii/S0065245815000649.

Felten, Ed, Manav Raj, and Robert Seamans. “How Will Language Modelers like ChatGPT Affect Occupations and Industries?” arXiv (March 18, 2023). http://arxiv.org/abs/2303.01157.

Franceschelli, Giorgio, and Mirco Musolesi. “Copyright in Generative Deep Learning.” Data & Policy 4 (2022): e17. https://doi.org/10.1017/dap.2022.10.

Gal, Uri. “ChatGPT Is a Data Privacy Nightmare. If You’ve Ever Posted Online, You Ought to Be Concerned.” The Conversation, 2023. https://www.sydney.edu.au/news-opinion/news/2023/02/08/chatgpt-is-a-data-privacy-nightmare.html.

Gandhi, Pritish, and Vineet Talwar. “Artificial Intelligence and ChatGPT in the Legal Context.” International Journal of Medical Sciences 10 (2023): 1–2.

Gasser, Urs, and Virgilio A.F. Almeida. “A Layered Model for AI Governance.” IEEE Internet Computing 21, no. 6 (2017): 58–62.

Gillotte, Jessica L. “Copyright Infringement in AI-Generated Artworks.” UC Davis Law Review 53 (2019): 2655.
Goodfellow, Ian, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. “Generative Adversarial Nets.” Advances in Neural Information Processing Systems 27 (2014). https://proceedings.neurips.cc/paper/5423-generative-adversarial-nets.

Hacker, Philipp, Andreas Engel, and Marco Mauer. “Regulating ChatGPT and Other Large Generative AI Models.” In 2023 ACM Conference on Fairness, Accountability, and Transparency, 1112–23. Chicago, IL: ACM, 2023. https://doi.org/10.1145/3593013.3594067.

Hadi, Muhammad Usman, Qasem Al Tashi, Rizwan Qureshi, Abbas Shah, Amgad Muneer, Muhammad Irfan, Anas Zafar, et al. “Large Language Models: A Comprehensive Survey of Its Applications, Challenges, Limitations, and Future Prospects.” TechRxiv (November 16, 2023). https://doi.org/10.36227/techrxiv.23589741.v4.

Haleem, Abid, Mohd Javaid, and Ravi Pratap Singh. “An Era of ChatGPT as a Significant Futuristic Support Tool: A Study on Features, Abilities, and Challenges.” BenchCouncil Transactions on Benchmarks, Standards and Evaluations 2, no. 4 (2022): 100089.

Hillemann, D. “Does ChatGPT Comply with EU GDPR Regulations? Investigating the Right to Be Forgotten.” Fieldfisher, 2023. https://www.fieldfisher.com/en/insights/does-chatgpt-comply-with-eu-gdpr-regulations-inves.

Houli, Daniel, Marie L. Radford, and Vivek K. Singh. “‘COVID19 Is_’: The Perpetuation of Coronavirus Conspiracy Theories via Google Autocomplete.” Proceedings of the Association for Information Science and Technology 58, no. 1 (October 2021): 218–29. https://doi.org/10.1002/pra2.450.

Jovanovic, Mladan, and Mark Campbell. “Generative Artificial Intelligence: Trends and Prospects.” Computer 55, no. 10 (2022): 107–12. https://doi.org/10.1109/MC.2022.3192720.

Karras, Tero, Samuli Laine, and Timo Aila. “A Style-Based Generator Architecture for Generative Adversarial Networks.” In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4401–10, 2019. http://openaccess.thecvf.com/content_CVPR_2019/html/Karras_A_Style-Based_Generator_Architecture_for_Generative_Adversarial_Networks_CVPR_2019_paper.html.

Kingma, Diederik P., and Max Welling. “Auto-Encoding Variational Bayes.” arXiv (December 10, 2022). http://arxiv.org/abs/1312.6114.

Lefstin, Jeffrey A., Peter S. Menell, and David O. Taylor. “Final Report of the Berkeley Center for Law & Technology Section 101 Workshop: Addressing Patent Eligibility Challenges.” Berkeley Technology Law Journal 33 (2018): 551.
Levantino, Francesco Paolo. “Generative and AI-Powered Oracles: ‘What Will They Say about You?’” Computer Law & Security Review 51 (November 1, 2023): 105898. https://doi.org/10.1016/j.clsr.2023.105898.

Levy, Karen, and Bruce Schneier. “Privacy Threats in Intimate Relationships.” Journal of Cybersecurity 6, no. 1 (January 1, 2020): tyaa006. https://doi.org/10.1093/cybsec/tyaa006.

Lim, Daryl. “AI & IP: Innovation & Creativity in an Age of Accelerated Change.” Akron Law Review 52 (2018): 813.

Lucchi, Nicola. “ChatGPT: A Case Study on Copyright Challenges for Generative Artificial Intelligence Systems.” European Journal of Risk Regulation, 2023, 1–23. https://doi.org/10.1017/err.2023.59.

Lucic, Mario, Karol Kurach, Marcin Michalski, Sylvain Gelly, and Olivier Bousquet. “Are GANs Created Equal? A Large-Scale Study.” Advances in Neural Information Processing Systems 31 (2018). https://proceedings.neurips.cc/paper/7350-are-gans-created-equal-a-large-scale-study.

Mannuru, Nishith Reddy, Sakib Shahriar, Zoë A. Teel, Ting Wang, Brady D. Lund, Solomon Tijani, Chalermchai Oak Pohboon, et al. “Artificial Intelligence in Developing Countries: The Impact of Generative Artificial Intelligence (AI) Technologies for Development.” Information Development (September 14, 2023). https://doi.org/10.1177/02666669231200628.

Mescheder, Lars, Andreas Geiger, and Sebastian Nowozin. “Which Training Methods for GANs Do Actually Converge?” Proceedings of the 35th International Conference on Machine Learning 80 (2018): 3481–90. https://proceedings.mlr.press/v80/mescheder18a.

Minssen, Timo, Effy Vayena, and I. Glenn Cohen. “The Challenges for Regulating Medical Use of ChatGPT and Other Large Language Models.” JAMA 330, no. 4 (2023): 315–16. https://doi.org/10.1001/jama.2023.9651.

Mittelstadt, Brent Daniel, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter, and Luciano Floridi. “The Ethics of Algorithms: Mapping the Debate.” Big Data & Society 3, no. 2 (December 2016). https://doi.org/10.1177/2053951716679679.

Müller, Vincent C., and Nick Bostrom. “Future Progress in Artificial Intelligence: A Survey of Expert Opinion.” In Fundamental Issues of Artificial Intelligence, edited by Vincent C. Müller, 555–72. Synthese Library 376. Cham: Springer International Publishing, 2016. https://doi.org/10.1007/978-3-319-26485-1_33.

Olejnik, Lukasz. “On the Governance of Privacy-Preserving Systems for the Web: Should Privacy Sandbox Be Governed?” In Handbook on the Politics and Governance of Big Data and Artificial Intelligence, 279–314. Edward Elgar Publishing, 2023. https://www.elgaronline.com/edcollchap/book/9781800887374/book-part-9781800887374-22.xml.
Palaniappan, Rajkumar, Kenneth Sundaraj, and Sebastian Sundaraj. “Artificial Intelligence Techniques Used in Respiratory Sound Analysis—A Systematic Review.” Biomedizinische Technik/Biomedical Engineering 59, no. 1 (January 1, 2014). https://doi.org/10.1515/bmt-2013-0074.

Parikh, Nishant A. “Empowering Business Transformation: The Positive Impact and Ethical Considerations of Generative AI in Software Product Management—A Systematic Literature Review.” In Transformational Interventions for Business, Technology, and Healthcare, edited by Darrell Norman Burrell, 269–93. Hershey, PA: IGI Global, 2023. https://doi.org/10.4018/979-8-3693-1634-4.ch016.

Peters, Michael A., Liz Jackson, Marianna Papastephanou, Petar Jandrić, George Lazaroiu, Colin W. Evers, Bill Cope, et al. “AI and the Future of Humanity: ChatGPT-4, Philosophy and Education—Critical Responses.” Educational Philosophy and Theory (June 1, 2023): 1–35. https://doi.org/10.1080/00131857.2023.2213437.

Polkowski, Zdzislaw. “The Method of Implementing the General Data Protection Regulation in Business and Administration.” In 2018 10th International Conference on Electronics, Computers and Artificial Intelligence (ECAI), 1–6. IEEE, 2018. https://ieeexplore.ieee.org/abstract/document/8679062/.

Radford, Alec, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. “Language Models Are Unsupervised Multitask Learners.” OpenAI Blog 1, no. 8 (2019): 9.

Rane, Nitin. “ChatGPT and Similar Generative Artificial Intelligence (AI) for Smart Industry: Role, Challenges and Opportunities for Industry 4.0, Industry 5.0 and Society 5.0.” Challenges and Opportunities for Industry 4 (2023). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4603234.

Ray, Partha Pratim. “ChatGPT: A Comprehensive Review on Background, Applications, Key Challenges, Bias, Ethics, Limitations and Future Scope.” Internet of Things and Cyber-Physical Systems 3 (2023): 121–54. https://www.sciencedirect.com/science/article/pii/S266734522300024X.

Roumeliotis, Konstantinos I., and Nikolaos D. Tselikas. “ChatGPT and Open-AI Models: A Preliminary Review.” Future Internet 15, no. 6 (2023): 192. https://doi.org/10.3390/fi15060192.

Rudin, Cynthia. “Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead.” Nature Machine Intelligence 1, no. 5 (2019): 206–15. https://doi.org/10.1038/s42256-019-0048-x.

Shokri, Reza, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. “Membership Inference Attacks against Machine Learning Models.” In 2017 IEEE Symposium on Security and Privacy (SP), 3–18. San Jose, CA, 2017. https://doi.org/10.1109/SP.2017.41.
Smits, Jan, and Tijn Borghuis. “Generative AI and Intellectual Property Rights.” In Law and Artificial Intelligence, edited by Bart Custers and Eduard Fosch-Villaronga, 35:323–44. Information Technology and Law Series. The Hague: T.M.C. Asser Press, 2022. https://doi.org/10.1007/978-94-6265-523-2_17.

Striuk, Oleksandr, Yuriy Kondratenko, Ievgen Sidenko, and Alla Vorobyova. “Generative Adversarial Neural Network for Creating Photorealistic Images.” In 2020 IEEE 2nd International Conference on Advanced Trends in Information Theory (ATIT), 368–71. Kyiv, Ukraine, 2020. https://doi.org/10.1109/ATIT50783.2020.9349326.

Taddeo, Mariarosaria, and Luciano Floridi. “How AI Can Be a Force for Good.” Science 361, no. 6404 (August 24, 2018): 751–52. https://doi.org/10.1126/science.aat5991.

Trump, D. “Maintaining American Leadership in Artificial Intelligence.” Federal Register, 2019. https://www.federalregister.gov/documents/2019/02/14/2019-02544/maintaining-american-leadership-in-artificial-intelligence.

Tull, Susan Y., and Paula E. Miller. “Patenting Artificial Intelligence: Issues of Obviousness, Inventorship, and Patent Eligibility.” Journal of Robotics, Artificial Intelligence & Law 1 (2018): 313.

Van Deursen, Alexander J.A.M., and Ellen J. Helsper. “The Third-Level Digital Divide: Who Benefits Most from Being Online?” In Communication and Information Technologies Annual, 10:29–52. Emerald Group Publishing Limited, 2015. https://www.emerald.com/insight/content/doi/10.1108/S2050-206020150000010002/full/html.

Veale, Michael, Max Van Kleek, and Reuben Binns. “Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making.” In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 1–14. Montreal, QC: ACM, 2018. https://doi.org/10.1145/3173574.3174014.
Venkatesh, Kesavan, Samantha M. Santomartino, Jeremias Sulam, and Paul H. Yi. “Code and Data Sharing Practices in the Radiology Artificial Intelligence Literature: A Meta-Research Study.” Radiology: Artificial Intelligence 4, no. 5 (September 1, 2022): e220081. https://doi.org/10.1148/ryai.220081.

Von Eschenbach, Warren J. “Transparency and the Black Box Problem: Why We Do Not Trust AI.” Philosophy & Technology 34, no. 4 (December 2021): 1607–22. https://doi.org/10.1007/s13347-021-00477-0.

Vondrick, Carl, Hamed Pirsiavash, and Antonio Torralba. “Generating Videos with Scene Dynamics.” Advances in Neural Information Processing Systems 29 (2016). https://proceedings.neurips.cc/paper/2016/hash/04025959b191f8f9de3f924f0940515f-Abstract.html.

Westerlund, Mika. “The Emergence of Deepfake Technology: A Review.” Technology Innovation Management Review 9, no. 11 (2019). https://doi.org/10.22215/timreview/1282.

Witt, Laurel. “Preventing the Rogue Bot Journalist: Protection from Non-Human Defamation.” Colorado Technology Law Journal 15 (2016): 517.

Yanisky-Ravid, Shlomit. “Generating Rembrandt: Artificial Intelligence, Copyright, and Accountability in the 3A Era: The Human-like Authors Are Already Here: A New Model.” Michigan State Law Review (2017): 659.

Zajko, M. “Conservative AI and Social Inequality: Conceptualizing Alternatives to Bias through Social Theory.” AI & Society 36 (2021): 1047–56. https://link.springer.com/article/10.1007/S00146-021-01153-9.

Zuboff, Shoshana. “Surveillance Capitalism and the Challenge of Collective Action.” New Labor Forum 28, no. 1 (2019): 10–29. https://journals.sagepub.com/doi/full/10.1177/1095796018819461.
This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.