Google's Gemini faced backlash for producing inaccurate and biased images when asked to depict historical scenarios. The company has not provided a timeline for when it will adjust the service to account for historical context or restore its ability to generate images of people. Meanwhile, lawsuits are pending against the makers of leading image and text generators over how artists' and media organizations' content is used and compensated. The outcomes of these lawsuits could significantly shape the future of generative AI tools.
Key takeaways:
- Google has paused Gemini's ability to generate images of people after backlash over its depictions of gender and ethnic diversity. Competitors in the space include OpenAI, Microsoft, and Adobe.
- Generative AI tools are trained on vast datasets and have been criticized for reproducing the ethnic and gender biases present in that data.
- There are concerns that, as AI-generated content becomes more detailed and realistic, it could be used to create deepfakes, spread dangerous misinformation, or produce other damaging material. Companies are developing tools such as watermarks to help distinguish real content from fake.
- Ownership of the data used to train these AI models remains legally unsettled, with many leading image and text generators facing lawsuits from artists and media organizations over the use of their content.