The author argues that AI companies are not transparent about how their models work or the data on which they are trained, and accuses them of making unilateral, arbitrary decisions about how their tools should represent the world. The article concludes that these companies have inadvertently taken ownership of a heightened, fuzzy, and somehow dumber copy of corporate America's fraught and disingenuous racial politics, at the expense of internet users.
Key takeaways:
- Google's new AI tool, Gemini, has been criticized for making racial interventions in user requests without explaining how or why, with some users accusing it of being part of a conspiracy against white people.
- Similar criticisms were leveled at OpenAI's DALL-E image generator last year over its attempts to diversify the images generated from prompts.
- Image generators trained on billions of pieces of public and semi-public data tend to reproduce predictable biases, often generating racial disparities more extreme than those in the real world.
- AI companies are accused of concealing their models' biases and of being opaque about how their models work and the data on which they're trained, inviting accusations of censorship and misrepresentation of the world.