Critics also point out that emotion-detection systems can be biased and unreliable; a 2020 MIT study, for instance, showed that face-analyzing models can develop unintended preferences for certain expressions. Although Google claims extensive testing and low levels of bias in PaliGemma 2, the company has not provided full details of the benchmarks used. Concerns have been raised about the potential misuse of such technology, particularly in high-risk contexts such as law enforcement, human resources, and border governance.
Key takeaways:
- Google has announced its new AI model family, PaliGemma 2, which can 'identify' emotions by analyzing images and generating contextually relevant captions.
- Experts have expressed concerns about the potential misuse of this openly available emotion detector, highlighting the complexity and subjectivity of interpreting emotions.
- Google claims to have conducted extensive testing to evaluate demographic biases in PaliGemma 2 but has not disclosed full details of the benchmarks used or the types of tests performed.
- There are concerns that such emotion-detection systems could be used to discriminate against marginalized groups in areas such as law enforcement, human resources, and border governance.