Expanding Use of Generative AI Tools Raises Questions of Legal Copyright

As the usage of generative AI tools continues to grow, and these tools become integrated into various ad creation platforms, the question of legal copyright over generated content is becoming more important than ever. There are concerns about how generative AI tools can be used without violating existing copyright laws, and whether the current legal frameworks are up to the task. In this article, we take a closer look at these issues and explore what experts are saying about the need for generative AI regulation.

Legal Ownership of Generative AI Content

At present, there is no clear legal framework for assigning ownership of generative AI content. One view holds that the creator is the person who entered the prompt into the generative AI tool. However, even this is not settled, as the legal status of AI-generated images is still being debated.

The US Copyright Office has stated that AI-generated images cannot be copyrighted, as an element of "human authorship" is required for such protection. This has created a legal minefield, as there may be no designated "creator" in this sense. As a result, artists and companies are seeking changes to protect their copyrighted works from unauthorized use in AI-generated content.

For example, the National Music Publishers Association recently issued an open letter urging Congress to review the legality of allowing AI models to train on human-created musical works. The letter was prompted by an AI-generated track that became popular online by mimicking Drake's distinctive voice and particular style, raising the question of whether it infringed on his rights.

Misinformation and Confusion Sparked by Generative AI Tools

Aside from copyright concerns, the increasing quality of AI-generated images is also raising fears of misinformation and confusion. In several cases, AI-generated visuals have been so convincing that they sparked confusion and even had an impact on stock prices.

For example, an AI-generated image of the "Pope in a puffer jacket" had many questioning its authenticity. More recently, an AI-generated image of an explosion outside the Pentagon sparked a brief panic before it was clarified that no such event had occurred.

The concern is that as these AI tools get better at replicating human creation, we soon won't be able to tell what's real and authentic and what's not. In response, Microsoft is looking to add cryptographic watermarks to all images generated by its AI tools to ensure a level of transparency and clarify the authenticity of such images.
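To make the idea of cryptographic provenance concrete, here is a minimal, hypothetical sketch of how a provider could bind a tamper-evident tag to generated image bytes. Real deployments (such as C2PA-style content credentials) use public-key signatures and embed the credential in the file's metadata; this illustration substitutes a simple HMAC over the raw bytes, and the key name and functions are assumptions, not any vendor's actual API.

```python
import hashlib
import hmac

# Assumed secret key held only by the image generator (illustrative only).
SIGNING_KEY = b"provider-secret-key"

def tag_image(image_bytes: bytes) -> str:
    """Derive a provenance tag from the exact image content."""
    return hmac.new(SIGNING_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, tag: str) -> bool:
    """Check that the tag matches the bytes, i.e. the image is unaltered."""
    expected = tag_image(image_bytes)
    # compare_digest avoids timing side channels when comparing tags.
    return hmac.compare_digest(expected, tag)

image = b"...raw image bytes..."
tag = tag_image(image)

print(verify_image(image, tag))           # untouched image verifies: True
print(verify_image(image + b"x", tag))    # any edit invalidates the tag: False
```

The key property this sketch demonstrates is tamper evidence: any change to the image bytes invalidates the tag, so a verifier can tell whether a labeled AI-generated image has been altered since generation.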

The Future of Generative AI Regulation

At present, there is no definitive legal instrument to stop people from creating and profiting from AI-generated works, even derivative ones. Although Microsoft is looking to address the issue through cryptographic watermarks, it is hard to ensure absolute detection and identification of generative AI images.

In terms of usage, businesses and individuals are largely free to use generative AI content for personal or commercial purposes. However, it's important to tread carefully when using celebrity likenesses or other copyrighted works. The music industry has already taken note of the potential issues with generative AI, and rules may be drawn up soon to restrict what can be done with generative AI tools in this respect.

The use of generative AI tools is expanding and raising important questions about copyright and legal ownership. At present, there is no definitive legal instrument to stop people from creating and profiting from AI-generated works, no matter how derivative they might be. As generative AI continues to advance and AI-generated content becomes increasingly realistic, it's clear that new regulatory frameworks will need to be developed to ensure that the creation, use, and distribution of AI-generated content neither infringes on existing rights nor spreads misinformation.