
Expanding Use of Generative AI Tools Raises Legal Questions

Potential Legal Copyright of Generative Content

Generative AI tools are being integrated into a growing range of platforms and used in ever more contexts. As a result, the question of who holds copyright over generative content has become a major concern. At present, brands and individuals can use generative AI content largely as they choose, but there are no legal provisions that assign direct liability. Determining who the “creator” of such content is in a legal sense is complicated: intuitively, the person who entered the prompt should be the “creator,” yet the US Copyright Office has stated that AI-generated images cannot be copyrighted at all, because an element of “human authorship” is required for protection. This absence of a recognized “creator” is the root of the legal complexity.

Calls for Generative AI Regulation as Existing Laws Fail

The legal provisions on the usage of generative AI tools stand as they are, although several artists are pushing for changes to protect their copyrighted works. The highly litigious music industry is also calling for policy changes after an AI-generated track imitating Drake gained significant attention online. The National Music Publishers Association has written an open letter urging Congress to review whether AI models should be legally permitted to train on human-created musical works. The track sounds convincingly like Drake, yet it is not clear that it infringes any existing copyright, which is precisely why the issue resonates with brands and artists and why many argue that new laws are needed to protect their rights. At present, the legal system has not caught up with generative AI tools, and there is no clear basis to prevent people from creating and profiting from AI-generated works, regardless of their originality.

Misinformation and Misunderstanding with Generative AI Images

Another growing concern is the potential for misinformation and misunderstanding created by generative AI tools. There have been several instances where AI-generated visuals were so convincing that they caused confusion and even moved stock prices. An AI-generated image of the “Pope in a puffer jacket” had many people questioning its authenticity, while an AI-generated image of an explosion outside the Pentagon sparked a brief panic before it was debunked. The fear is that as these tools get better at mimicking human creation, it will become increasingly difficult to distinguish what is real from what is not, and the line between human and machine creativity will blur.

Microsoft Uses Cryptographic Watermarks to Add Transparency to AI-Generated Images

Microsoft has partnered with OpenAI and is seeking to integrate OpenAI’s systems into all of its apps. As part of its work with the Coalition for Content Provenance and Authenticity (C2PA), Microsoft is adding cryptographic watermarks to the images generated by its AI tools. The aim is to add an extra level of transparency: the watermarks are embedded in the image metadata, so viewers have a way to confirm whether an image is real or AI-created. The approach is not foolproof, however. Taking a screenshot, or re-encoding the image by other means, strips the metadata and the watermark with it. And even with the markers in place, there is still no clear legal basis for enforcing infringement claims over generative AI images.
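To illustrate the general idea of cryptographic provenance, the sketch below signs an image’s bytes with a private key and stores the signature in a small JSON provenance claim; anyone holding the matching public key can then check that the image has not been altered since it was signed. This is not Microsoft’s or C2PA’s actual manifest format (which is considerably more involved), and the file names and claim fields are illustrative assumptions only.

```python
# Minimal sketch of cryptographic provenance for an image, assuming the
# `cryptography` package is installed. This is NOT the C2PA manifest format;
# it only illustrates the sign-then-verify idea behind content credentials.
import json
from pathlib import Path

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519


def sign_image(image_path: str, key: ed25519.Ed25519PrivateKey) -> None:
    """Sign the raw image bytes and write a JSON provenance sidecar file."""
    data = Path(image_path).read_bytes()
    claim = {
        "generator": "example-ai-image-tool",  # illustrative value, not a real product ID
        "signature": key.sign(data).hex(),     # Ed25519 signature over the image bytes
    }
    Path(image_path + ".provenance.json").write_text(json.dumps(claim))


def verify_image(image_path: str, public_key: ed25519.Ed25519PublicKey) -> bool:
    """Return True if the image still matches its signed provenance claim."""
    data = Path(image_path).read_bytes()
    claim = json.loads(Path(image_path + ".provenance.json").read_text())
    try:
        public_key.verify(bytes.fromhex(claim["signature"]), data)
        return True
    except InvalidSignature:
        # Any re-encoding (e.g. a screenshot) changes the bytes and breaks the check.
        return False


if __name__ == "__main__":
    private_key = ed25519.Ed25519PrivateKey.generate()
    sign_image("generated.png", private_key)  # hypothetical AI-generated image file
    print(verify_image("generated.png", private_key.public_key()))
```

Note that this toy version stores the claim in a sidecar file rather than in the image metadata itself, which is exactly why the real schemes embed the data in the file: provenance that travels with the image survives ordinary sharing, but is still lost the moment the pixels are re-encoded without it.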

Legal Usage of Generative AI Content

At present, individuals and brands can legally use generative AI content for personal or business purposes, but celebrity likenesses need to be handled with care. In the absence of definitive legal recourse, it is impossible to know how this will change. For example, a recent fake Ryan Reynolds ad for Tesla (not an official Tesla promotion) was pulled from its original online source: even though creating AI content is legal and celebrity likenesses can be replicated, lines are being drawn and provisions are being put in place. For backgrounds, minor elements, or content that is not obviously derivative of an artist’s work, generative AI imagery can be used legally in business content. The same applies to generated text, although its accuracy must be checked carefully.

Sources:

– https://www.socialmediatoday.com/news/legal-experts-call-for-generative-ai-regulation-as-existing-laws-fail-to/597625/
– https://aws.amazon.com/blogs/machine-learning/copyright-and-ai-a-maze-of-challenges/
– https://www.ipwatchdog.com/2021/03/11/copyright-ai-algorithms-artificial-creativity/id=130536/
– https://techcrunch.com/2021/11/22/microsoft-is-adding-cryptographic-watermarks-to-ai-generated-images/
