July 18, 2024

Rights Holder and Revenue Share: Analysis on Generative AI Regulation in the US

The EU AI Act was officially published on 12 July 2024, which means the countdown for compliance has begun. Similar regulatory discussions are taking place in other jurisdictions, and they matter just as much to AI's future and to responsible training data. The focus is broadly centered on the copyright and licensing of content and data. Meanwhile, the music industry has been front and center in advocating for solutions and regulations that protect the copyright of its artists and their works.

The US is a significant jurisdiction because some of the largest AI companies, and some of the most consequential copyright lawsuits, are centered in its legal system. Three pieces of regulation have come to light that contribute to the discussion surrounding copyright and training data. First is the ELVIS Act, signed into Tennessee law, which updates personal likeness rights to include voice. Second is the proposed COPIED Act, which formalises the requirement for provenance and gives rights holders legal avenues to take control of their property. Finally, the Generative AI Copyright Disclosure Act of 2024 proposes a push for copyrighted-inventory transparency from AI developers and deployers. Each of these regulations will be briefly discussed, weighing up its pros, cons, and impact. Finally, the solutions that will facilitate and support these regulations will be briefly examined.

ELVIS Act

On March 21, 2024, the Ensuring Likeness Voice and Image Security Act (ELVIS Act) was signed into law by Tennessee Governor Bill Lee. The purpose of the act is to legally protect musicians from unauthorised usage of their likeness in deepfakes generated using AI tools. The law achieves this by prohibiting people from using AI to mimic a person’s voice without their permission. Violations can be criminally enforced as a Class A misdemeanour, which in Tennessee can carry up to a year in prison or a fine of $2,500. This comes amid a rise in the accessibility of AI tools that can recreate the likeness of an artist for works they were never involved with. One example continually mentioned in this context is the anonymous TikTok user ‘Ghostwriter977’, who recreated the voices of Drake and The Weeknd in a song that amassed over 11 million views without ever involving either party in its creation. This example shows very clearly how developed and accessible these AI tools have become. Another example is the controversy over OpenAI coming very close to mimicking Scarlett Johansson’s voice from the movie ‘Her’. Both examples clearly delineate the need to protect personal likeness against a new suite of generative AI cloning models and tools that enable convincing human impersonation.

The ELVIS Act has some strengths in its design:

  • The Act provides robust protection for the likeness and image rights of individuals, especially public figures such as musicians and artists. This helps prevent unauthorised commercial use of an individual's likeness, ensuring that rights holders can control and monetise their image.
  • The act offers clear legal recourse for rights holders, letting them act against unauthorised use. It sets a strong legal precedent for protecting intellectual property rights, deterring potential violators.
  • The Act clearly defines what constitutes the misuse of likeness, providing a precise legal framework. This clarity helps both rights holders and potential users understand their boundaries and obligations.

The weaknesses of the Act:

  • The broad scope of the act might lead to excessive restrictions that could stifle creativity and innovation. In particular, it could result in legal challenges that are costly and time-consuming for both rights holders and accused violators.
  • The act will negatively impact AI development. By making it harder to access resources such as likenesses and images, smaller and less well-funded AI developers will be hindered in their development processes. However, this falls into the broader debate of open- versus closed-source development, and collective versus collected intelligence.
  • Enforcing the Act across different jurisdictions, especially given the global nature of the internet and digital media, will be its biggest challenge and limitation. It is only legally applicable in Tennessee. This makes its symbolism incredibly strong, but ultimately renders it meaningless outside state and national borders.

To summarise, the ELVIS Act directly regulates the use of someone's likeness in AI-generated materials. It gives rights holders clear legal recourse against violators, but the act falls short because it is only applicable within the state of Tennessee. Nevertheless, it could serve as a strong example of rights protections moving forward.

COPIED Act

Moving up to the federal level, the Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act) takes a slightly different approach to ELVIS. COPIED seeks to protect artists, songwriters, and journalists from having their content used to train AI models or generate AI content without their consent. This is distinctly different from the ELVIS Act because it targets a rights holder's works, not their likeness. COPIED sets out to achieve this by making it easier to identify AI-generated content and to combat the rise of harmful deepfakes or reproduced works, by enforcing mandatory content provenance information that cannot be removed. The bill itself “requires developers and deployers of AI systems and publications used to generate covered content (any digital representation of someone’s or something’s work) to give users the option to attach content provenance information within 2 years”. The act prohibits any removal or tampering of this content provenance information, with a limited exception for security research purposes. It also calls upon the National Institute of Standards and Technology (NIST) to create guidelines and standards for content provenance information, watermarking, and synthetic content detection.
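
The Act leaves the technical standards to NIST, but the core mechanism it mandates (machine-readable provenance that is tamper-evident) can be sketched in a few lines. The following is a minimal illustration only, not any real standard; all field names and functions here are hypothetical:

```python
import hashlib
import json


def attach_provenance(content: bytes, provenance: dict) -> dict:
    """Bundle provenance metadata with a digest that binds it to the
    content, so any later alteration of either is detectable."""
    record = {
        "provenance": provenance,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    # Digest over the canonical JSON of the record itself. A real system
    # would use a cryptographic signature here, not a bare hash.
    record["record_sha256"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record


def verify_provenance(content: bytes, record: dict) -> bool:
    """Return True only if neither the content nor the provenance
    metadata has changed since the record was created."""
    body = {k: v for k, v in record.items() if k != "record_sha256"}
    expected = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return (record.get("record_sha256") == expected
            and record.get("content_sha256")
            == hashlib.sha256(content).hexdigest())


# Illustrative usage: an intact record verifies; a tampered one does not.
song = b"audio bytes of an original recording"
record = attach_provenance(song, {"creator": "Example Artist",
                                  "licence": "all-rights-reserved"})
assert verify_provenance(song, record)
record["provenance"]["creator"] = "Someone Else"
assert not verify_provenance(song, record)
```

In a real deployment the bare hash would be replaced by a digital signature tied to the rights holder's identity, so third parties could also verify who attached the provenance, which is the kind of detail the Act delegates to NIST's forthcoming standards.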

The Act has some tangible strengths:

  • Attaching provenance information to covered content will create a lineage of the data sources and show proper attribution to the generated content. This will be massive for any accreditation or revenue sharing mechanisms further down the line.
  • Clear delineation of rights management by making provenance information tamper-proof brings greater power to rights holders that have claimed the intellectual contribution to a work.
  • It prohibits the use of any content that carries provenance information to train any AI or algorithm-based system to create synthetic content “without the express, informed consent and adherence to the terms of use of such content, including compensation”. This provides a welcome contribution to the modernisation of IP laws to address the challenges posed by AI technologies.

However, the act itself does seem to have some significant gaps:

  • Edward Lee of chatgptiseatingtheworld, a website dedicated to tracking lawsuits and regulatory updates relating to generative AI, states that “the bill is proposed to exist outside of the copyright system, and apparently not be subject to fair use or other exceptions to copyright, such as idea-expression”. For the time being this does not register as a major concern, but as generative AI tools become cheaper and more accessible to developers outside of resource-heavy startups, questions of fair use through transformation or idea-expression might prove to be a conversation, or an amendment, that is needed moving forward.
  • There is a risk of overregulation, given the barriers being put in place for innovation and the significant compliance costs imposed on small creators and startups.
  • The Act will struggle with the complexities of AI-generated content, especially in distinguishing between human and AI contributions to creative works. Legal ambiguities regarding the ownership and rights of AI-generated works could lead to disputes and litigation regardless of what provenance information is available.

In summary, the act is a strong attempt to modernise IP protection and enforcement through provenance information. However, the bill appears to have been developed outside of traditional, well-established copyright law, which could prove difficult moving forward with regard to the attribution of AI-generated content.

Generative AI Copyright Disclosure Act of 2024

The Generative AI Copyright Disclosure Act of 2024 is strikingly simple in its design but potentially very effective in its intended outcome. The act “would require a notice to be submitted to the Register of Copyrights prior to the release of a new generative AI system with regard to all copyrighted works used in building or altering the training dataset for that system”. Future models that are set to be released publicly will need to file a list of the copyrighted works used at least 30 days before the models are made publicly available. For existing models, the list of copyrighted works must be filed within 30 days after the Act goes into effect. The Register of Copyrights will establish and maintain a publicly available online database containing each notice.

The strengths of this act are as follows:

  • There is very clear transparency and accountability, with the onus pushed onto the AI companies. The public register of submissions would be a powerful enforcement tool, particularly if it could be audited by any individual.
  • Rights holders will have another developed framework within which to pursue legal action. This ensures they receive proper recognition and compensation if their work is used in AI training datasets.
  • The Act is already based within existing copyright legal infrastructure. This makes fine-tuning the act and its processes far simpler and easier than what is proposed in the COPIED Act.

Weaknesses:

  • The act will initially be hard to implement, owing to the difficulty of determining the extent of AI's involvement in creating content and of enforcing disclosure requirements.
  • Like COPIED, the Act might fall short of addressing the complexities of ownership and liability for AI-generated content, leading to potential legal disputes.

In summary, the Generative AI Copyright Disclosure Act of 2024 forces AI developers to be transparent about what works go into their systems, and it uses existing legal infrastructure to do so. However, the complexity of rights management in generative AI might require some fine-tuning of the act, or of the definitions of what constitutes a copyrighted work that has been used.

If we zoom out and look at the ELVIS Act, the COPIED Act, and the Generative AI Copyright Disclosure Act of 2024, all three support each other in a complementary fashion in attempting to put power back into the hands of rights holders. ELVIS specifically targets the reproduction of an individual's likeness, COPIED allows rights holders to attach provenance information to their works, and the Generative AI Copyright Disclosure Act of 2024 forces AI developers to be transparent. Covering all three of these bases creates a very strong foundation for the future of rights holders in generative AI, but it is abundantly clear that there will be a period of fine-tuning and redrafting regulation to make it efficient. Across all the Acts, the biggest winners appear to be enterprise-level creative-work conglomerates (record labels, art and image conglomerates), and the biggest losers will be SMEs and low-budget AI developers that will struggle to innovate or will be hindered by compliance.

So what do these regulations tell us about the solutions that are going to be used in the near future?

Provenance graphs illustrate the supply chain of training data from source to end user, promoting accountability within the data supply chain by ensuring transparency from both data owners and consumers. This accountability incentivises data owners to clearly disclose their content for licensing and allows AI developers to understand their training inputs and outcomes, ensuring rights holders are adequately compensated. Platforms like Valyu’s exchange offer provenance for datasets, supporting copyright and transparency regulations.
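
As a concrete illustration of the idea, a provenance graph is simply a directed graph from each artefact (dataset, model, generated output) back to the sources it was derived from; walking the graph backwards recovers the full supply chain behind any output. A minimal sketch, with all artefact names invented for illustration:

```python
# Minimal provenance graph: each artefact maps to the sources it was
# derived from. All names here are illustrative, not from any real system.
provenance = {
    "song_master.wav": [],                    # original rights-holder source
    "licensed_corpus": ["song_master.wav"],   # licensed training dataset
    "music_model_v1":  ["licensed_corpus"],   # model trained on the corpus
    "generated_track": ["music_model_v1"],    # end-user output
}


def lineage(artefact: str, graph: dict) -> list:
    """Walk back through the graph and return every upstream source,
    i.e. the full supply chain behind an artefact."""
    seen, stack = [], [artefact]
    while stack:
        node = stack.pop()
        for parent in graph.get(node, []):
            if parent not in seen:
                seen.append(parent)
                stack.append(parent)
    return seen


# Tracing a generated track back to the original recording makes
# attribution (and any revenue-share calculation) mechanically checkable.
print(lineage("generated_track", provenance))
```

Once this lineage is available, compensating the rights holder of `song_master.wav` for outputs of `music_model_v1` becomes a lookup rather than a forensic exercise, which is exactly the accountability the regulations above are driving towards.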

AI copyright regulation globally is still at an incredibly early stage, and it is expected that there will be redrafting and fine-tuning of most laws as best practices emerge for each use case and industry. For the time being, both rights holders and developers should be watching the regulation closely to understand how it may affect them, and what solutions are being built to address their pain points. For now, the future is looking promising.
