The move comes amid debates about the use of generative AI tools, which often use publicly available content for training without crediting or compensating the original creators. Kickstarter's new policy aims to increase transparency and trust, but it could prove contentious, especially as some AI vendors are reluctant to disclose their training data sources. The policy will be enforced through a new set of questions for project submissions and a standard human moderation process.
Key takeaways:
- Kickstarter has announced a new policy requiring projects that use AI tools to generate content to disclose relevant details, including how AI will be used in the work and which elements will be AI-generated.
- Projects involving the development of AI technology must also disclose the sources of training data they intend to use, including how those sources handle consent and credit.
- The new policy will come into effect on August 29, but will not be retroactively enforced for projects submitted prior to that date.
- Kickstarter's move toward this policy has been gradual: the platform previously banned a project that used AI to generate art without safety filters, as well as a project that used AI to plagiarize an original comic book.