When it comes to the legal frontiers we are currently approaching with respect to generative artificial intelligence (AI) and its role in invention and creation, I find myself both intrigued and cautious. The potential of this technology to revolutionize industries and foster innovation is undeniable. Yet, the legal challenges it presents are equally formidable, particularly at the intersection of AI and intellectual property.

The core legal implications of AI, particularly concerning copyright and patent laws, represent a significant paradigm shift. The assumption that a creator must be human is now being challenged by the capabilities of AI, raising questions about the extent of AI’s involvement in the creative process and the rights associated with such works. Similarly, patent law grapples with the role of AI in the invention process, as current regulations require an inventor to be a human being.

The development of generative AI systems introduces additional complexities, with recent lawsuits highlighting issues of infringement and data security. The use of proprietary material in training AI systems and the potential exposure of confidential information pose significant risks.

As we navigate this uncharted territory, the need for clear guidelines, new legal frameworks, and collaboration between legal experts, developers, and policymakers is evident. While the legal landscape around AI is riddled with uncertainty, I remain hopeful about the potential of this technology to be a positive force in the world. The journey ahead is challenging, but with ongoing dialogue and innovation, my hope is that we can harness the benefits of AI while ensuring fairness, respect, and accountability.

AI and Copyright Infringement

As generative AI technologies enter the market, their use raises significant legal implications under existing copyright laws. Courts are currently navigating how to apply these laws to AI-generated content, dealing with issues such as infringement, rights of use, and the murky waters of ownership of AI-generated works. A key question is whether users should be able to prompt AI tools with direct references to other creators’ copyrighted and trademarked works without their permission.

The legal system is being tasked with defining the scope of what constitutes a “derivative work” under intellectual property laws. The interpretation of this term can vary depending on the jurisdiction, leading to different rulings by federal circuit courts. The outcomes of these cases are likely to hinge on the interpretation of the fair use doctrine, which permits the use of copyrighted work without the owner’s permission for purposes such as criticism, commentary, news reporting, teaching, scholarship, or research, as well as for transformative uses that repurpose the copyrighted material in a manner not originally intended.

The intersection of technology and copyright law is not a new phenomenon. A notable example is Authors Guild v. Google, Inc., where Google’s successful defense argued that transformative use allowed for the scraping of text from books to create its search engine. This decision remains a precedent in the legal landscape.

Doe et al. v. GitHub, Inc. et al.

In a first-of-its-kind class action lawsuit, Microsoft, its subsidiary GitHub, and its business partner OpenAI have been accused of engaging in “software piracy on an unprecedented scale” through their AI-powered coding assistant, GitHub Copilot. This case, filed in November 2022, has the potential to significantly impact the broader world of AI, particularly in how companies utilize copyright-protected data to train their software.

GitHub Copilot is trained on public repositories of code scraped from the web. The plaintiffs argue that by training their AI systems on public GitHub repositories, the defendants have infringed upon the legal rights of numerous creators who posted code under specific open-source licenses.

These licenses, including the MIT license, the GPL, and the Apache license, all mandate the attribution of the author’s name and copyright. The lawsuit contends that, in addition to violating these attribution requirements, the defendants have also breached GitHub’s own terms of service and privacy policies, violated DMCA § 1202 by removing copyright-management information, infringed upon the California Consumer Privacy Act, and committed other related legal violations.

Andersen v. Stability AI et al.

In January 2023, three graphic artists initiated a class-action lawsuit against multiple generative AI platforms. The crux of the lawsuit is the allegation that these platforms have used the artists’ original works without proper licensing to train their AI systems in their unique styles. This has enabled users to generate artworks that are potentially too similar to the existing, protected works of the artists, thereby creating unauthorized derivative works.

The lawsuit specifically states that the defendants have utilized copies of the training images to produce digital images and other outputs that are derived solely from these training images, without adding anything new. The plaintiffs claim this “unlawful appropriation” has devalued and diluted the worth of their art in a market now flooded with similar-looking AI-generated images.

A similar lawsuit has also been filed against Stability AI by Getty Images, a major stock image database company. The lawsuit accuses Stability AI of misusing over 12 million Getty photos to train its Stable Diffusion AI image-generation system.

If the courts determine that the works produced by the AI are unauthorized and derivative, significant infringement penalties could be imposed. Further, the ramifications for the operations of the AI companies themselves could be far-reaching and potentially existential.

Silverman et al., v. OpenAI Inc et al.

Comedian and author Sarah Silverman has become part of a class-action lawsuit against OpenAI and another lawsuit against Meta, alleging copyright infringement by these companies. The lawsuits claim that these companies have “copied and ingested” her protected work to train their artificial intelligence programs.

The lawsuit against Meta specifically references the company’s research paper on “LLaMA,” its large-language model used for training chatbots. The plaintiffs argue that their copyrighted materials were used as part of this training process, with many of the plaintiffs’ books appearing in datasets culled from so-called “shadow libraries” that Meta has acknowledged using.

The legal proceedings have encountered some challenges. Most of Silverman’s ancillary claims were dismissed, although the direct copyright infringement claim has progressed to the discovery phase. On February 12, 2024, further stumbling blocks emerged with the dismissal of additional claims, including allegations of vicarious copyright infringement, violations of the Digital Millennium Copyright Act, negligence, and unjust enrichment. The judge did allow the unfair competition claim to proceed, while noting that it may ultimately be preempted by the federal Copyright Act, which bars state-law claims alleging the same violation as a copyright claim.

Artificial Intelligence and Patent Law: AI Ownership

While AI’s involvement in creating inventions is less controversial than its role in generating artistic works, the notion of AI being credited as the inventor of something poses significant legal challenges.

The case of Stephen Thaler and his AI system, DABUS, has brought this issue to the forefront. Thaler claims that DABUS is responsible for inventing two items: a fractal-shaped food container and a flashing emergency beacon. Thaler has sought patents explicitly naming the AI system as the inventor. However, authorities in the U.K., Europe, the U.S., Australia, and New Zealand have denied his applications. The U.S. Court of Appeals for the Federal Circuit ruled in Thaler v. Vidal that AI cannot be named as an inventor on a patent because the Patent Act requires inventors to be “natural persons.” Despite these setbacks, Thaler achieved a notable victory in South Africa, where the first patent naming an AI as inventor was granted, for the fractal food container.

The debate over AI-generated inventions is not just a philosophical one about human versus machine intelligence. It has practical implications for the future of innovation and global competitiveness. Legal scholars and computer scientists are exploring ways to adapt the patent system to accommodate AI-generated inventions. Toby Walsh and Alexandra George suggest creating a new category called “AI-IP” for AI-generated inventions, with shorter patent durations and possible shares for AI model developers or training data owners.

The U.S. Senate held a hearing on AI and patents, featuring representatives from technology and pharmaceutical companies, as well as Dr. Ryan Abbott, founder of the Artificial Inventor Project. The project seeks legal protection for AI-generated inventions, and Abbott argues that rapidly advancing AI, particularly generative AI, is more than just a tool: he claims it is capable of producing unscripted, creative results akin to those of a person.

Risks to Data Privacy From Artificial Intelligence

Generative AI models rely heavily on vast amounts of data for training. These datasets can include personal or private information of individuals or companies, including proprietary or trade secret information. This dependency on data raises questions about how AI systems collect, process, and utilize information, potentially leading to infringements or misappropriation. One of the primary risks associated with AI and data privacy is the potential for data leaks. Any breach or unauthorized access to these AI systems could result in the exposure of sensitive information. 

False claims and misinformation are another concern in the realm of AI and data privacy. For instance, an Australian mayor threatened a defamation lawsuit against OpenAI, alleging that its ChatGPT chatbot falsely claimed he had served time in prison for bribery. This incident highlights the potential for AI-generated content to spread inaccurate information, which can have serious repercussions for individuals’ reputations and privacy.

The use of demographic data to train AI models raises both ethical concerns and issues around liability. The “garbage-in-garbage-out” model applies here: an AI tool trained on biased data may learn to stereotype or discriminate, leading to biased decision-making. This bias could manifest in various ways that present risks for those employing the tools, such as in loan approval processes, hiring practices, or targeted advertising. The use of prejudiced AI, even inadvertently, could expose its users to risks around discrimination or defamation, and could have a damaging effect on their brand and public image.

Regulations like the General Data Protection Regulation (GDPR) in the European Union and the California Consumer Privacy Act (CCPA) in the United States aim to address these privacy concerns by providing guidelines for data collection, processing, and protection. These regulations are designed to safeguard individuals’ privacy rights and ensure that personal data is handled responsibly. However, the pervasive nature of AI poses challenges to the effective implementation of these regulations. AI’s ability to process and analyze data on a large scale makes it difficult to fully control and monitor, leading to potential gaps in privacy protection.

Potential Solutions and the Road Ahead

Given that we are still in the nascent stages of AI-generated content, unraveling the legal intricacies around IP created by or with AI will be an ongoing and complex process, one likely to span many years. As is true with the development of any broad legal superstructure, the full scope of precedent surrounding AI and ownership will become clearer as more cases are brought and as we delve deeper into the subtleties of the technology. As the legal landscape surrounding artificial intelligence continues to evolve, however, several potential solution frameworks emerge to address the challenges posed by generative AI.

  1. Clear Guidelines for Responsible Development and Use: Both Congress and the courts will need to work to establish clear guidelines that take into consideration ethics, data privacy, intellectual property rights, and transparency in AI decision-making processes. Significant legislation will be required to bring these standards into practice.
  2. Collaboration Between Stakeholders: Ongoing collaboration between legal experts, developers, and policymakers is crucial to ensure that laws and regulations keep pace with technological advancements. A multidisciplinary approach involving private industry, legislative bodies, and other institutions such as universities and the military could facilitate a comprehensive understanding of the implications of AI and foster the development of balanced and effective legal frameworks.
  3. Exploring Novel Legal Frameworks: The unique nature of AI-generated creations may necessitate the exploration of new legal frameworks specifically tailored to address issues such as copyright, patentability, and liability. These frameworks should account for the autonomous capabilities of AI and the blurring lines between human and machine creativity. As previously mentioned, some have suggested a new classification of patent, for instance, that specifically addresses the unique legal questions brought about by the use of AI in invention.
  4. Redefining Inventorship: The question of whether AI can be considered an inventor challenges traditional notions of inventorship and patent law, and opens the door to redefining the legal classification and understanding of “inventor.” As the technology progresses and becomes even more ubiquitous, legal systems will need to adapt to accommodate AI-generated inventions, possibly by legally recognizing AI as a tool used by human inventors or by creating a new category of inventorship for AI-assisted creations.

As AI technologies continue to advance and permeate various sectors, it is imperative to engage in open and ongoing discussions and research to ensure that the development and use of AI aligns with societal values and legal principles. The goal should be to harness the benefits of AI while mitigating its risks and ensuring fairness, accountability, and respect for human rights.

Mitigate AI Legal Issues with Training and Alignment

Specialized IP counsel can play a crucial role in guiding technology companies as we take these steps into an AI-adapted innovation space. These legal experts are invaluable resources for training inventors and other innovation stakeholders in navigating the dynamic and mutable world of IP law as it pertains to AI. This new paradigm necessitates that companies adopt proactive measures to safeguard their interests in both the short and long term.


AI developers must ensure compliance with the law concerning their acquisition of data used to train their models. This involves following any new regulations surrounding licensing and compensating IP owners for the data added to their training datasets. 

As of now, it is at least partly the responsibility of customers of AI tools to inquire whether the models were trained with protected content, to review terms of service and privacy policies, and to avoid tools that cannot confirm proper licensing of their training data. It is likely, however, that new legal precedents will shift more of the responsibility for protecting IP rights onto developers.

One solution developers can employ now is to prioritize transparency by maintaining the provenance of AI-generated content. This includes recording details about the development platform, settings used, seed-data metadata, and tags for AI reporting. Such audit trails can protect business users from IP infringement claims and demonstrate that outputs were not created with the intent to copy or steal.
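An audit trail of the kind described above can be as simple as a structured log entry attached to each generated asset. The sketch below is a minimal illustration in Python; the field names, tool name, and record format are entirely hypothetical (no industry standard is assumed), but it shows one way a team might record provenance metadata and fingerprint it so a later reviewer can verify the record has not been altered:

```python
import hashlib
import json
from datetime import datetime, timezone

def build_provenance_record(platform, model_version, settings, seed_data_refs, tags):
    """Assemble an audit-trail entry for one piece of AI-generated content.

    All field names here are illustrative, not a standard schema.
    """
    record = {
        "platform": platform,              # generation tool used
        "model_version": model_version,    # version of the underlying model
        "settings": settings,              # prompt parameters, seeds, etc.
        "seed_data_refs": seed_data_refs,  # identifiers for any seed/input material
        "tags": tags,                      # labels used for AI-disclosure reporting
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    # Hashing a canonical JSON serialization of the record lets a later
    # reviewer confirm the entry has not been modified since creation.
    canonical = json.dumps(record, sort_keys=True)
    record["record_hash"] = hashlib.sha256(canonical.encode()).hexdigest()
    return record

# Example entry for a hypothetical image-generation tool.
entry = build_provenance_record(
    platform="example-image-generator",
    model_version="1.0",
    settings={"prompt": "abstract landscape", "seed": 42},
    seed_data_refs=["asset-0001"],
    tags=["ai-generated"],
)
```

In practice such records would be appended to a tamper-evident store (or signed), but even a simple hashed log like this can support the kind of demonstration of intent discussed above.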


Individual content creators and brands should proactively protect their IP portfolios. This includes monitoring digital and social channels for works derived from their own content and using search tools to automate the examination of large-scale datasets. For brands with valuable trademarks, monitoring should evolve to examine the style of derivative works, as stylistic elements may suggest misuse of a brand’s content. IP legal counsel can perform landscape monitoring regularly to help creators keep a watchful eye on their proprietary brand assets.

On the other side of this coin, creators should work toward their own framework of working with AI-generated material to help protect themselves against claims of infringement, plagiarism, or misappropriation. Maintaining a clear understanding of what outputs qualify as “fair use” or sufficiently “transformative” will be crucial as they enter the murky world of AI-generated creative work. 

Finally, creators will need to properly establish ownership over any creative work generated with the help of AI, especially if they intend to profit from that work. Working with IP counsel can help creators in all these areas to stay ahead of this shifting legal territory and ensure they remain on the right side of the law.


Companies should regularly evaluate contractual language and terms to make sure issues around IP ownership with respect to AI are clearly addressed, and that guidelines for the use of AI as a tool for innovation are transparent. This includes demanding confirmation of proper licensure of training data from AI platforms and broad indemnification for potential IP infringement. Vendor and customer agreements should include AI-related language to ensure that IP rights are protected and that all parties support the registration of authorship and ownership of AI-generated works.

As the legal landscape continues to shift, keeping legal counsel informed about the use of generative AI is essential. Organizations should consider creating generative AI checklists for contract modifications to reduce unintended risks of use. By taking these proactive steps, innovation companies can navigate the complexities of AI and IP law, helping to guarantee that their inventions are legally protected and ethically developed.

AI & Intellectual Property: Final Thoughts

The legal issues surrounding AI are complex and evolving. The question of whether and how AI-generated inventions should be protected will have significant implications for the future of innovation.

As AI technologies become more embedded in our lives, the need for updated laws and regulations becomes increasingly apparent. However, the slow pace of legislative processes poses challenges. It’s imperative to find solutions that ensure the responsible development and use of AI, balancing innovation with legal and ethical considerations. 

Despite the hurdles, I believe the potential for AI to drive positive change remains vast. I maintain a hopeful outlook towards resolving these legal dilemmas, and opening new avenues for innovation to make the world a better place for all of us.

Michael Dilworth

This article is for informational purposes, is not intended to constitute legal advice, and may be considered advertising under applicable state laws. The opinions expressed in this article are those of the author only and are not necessarily shared by Dilworth IP, its other attorneys, agents, or staff, or its clients.