The article also touches on the debate over whether training AI models on copyrighted material qualifies as fair use, and whether the output of these models is transformative rather than a direct copy of the original works. It notes lawsuits against AI companies such as OpenAI and Midjourney alleging infringement of intellectual property. The article concludes by questioning whether copyright law unfairly hinders small startups from competing with Big Tech, and whether synthetic music is commercially viable given the unresolved legal gray area around copyright in AI-generated content.
Key takeaways:
- AI models are increasingly being used to generate music, but this has led to legal challenges due to copyright laws. Record labels are particularly litigious, and AI developers could face lawsuits if they use copyrighted music without permission.
- Some AI developers may choose to train their models on music they have created, commissioned, or otherwise have permission to use, to avoid potential legal battles. However, it remains to be seen how these models will compare to those trained on a broader range of audio.
- AI makers generally argue that training their models on copyrighted material is fair use and that the output of these models is transformative, not a direct copy of the original works. However, not everyone is convinced by these arguments.
- The threat of lawsuits means that anyone building models capable of generating music needs either deep pockets to fend off music publishers or the funds to compensate artists for explicit permission to use their work. This raises questions about whether copyright law is impeding small startups from competing with Big Tech, and how musicians and developers can work together to advance AI ethically.