However, Meta's track record with responsible AI has drawn criticism, including accusations of bias in its algorithms. The company claims FACET is more thorough than previous benchmarks, but it has not disclosed how the annotators were recruited and compensated. Whatever questions surround its origins, Meta says FACET can be used to probe various models across different demographic attributes. The company applied FACET to its own DINOv2 computer vision model and uncovered several biases.
Key takeaways:
- Meta has released a new AI benchmark, FACET, designed to evaluate the fairness of AI models that classify and detect things in photos and videos. It consists of 32,000 images containing 50,000 people, labeled by human annotators.
- FACET is designed to cover classes related to occupations and activities as well as demographic and physical attributes, allowing for deep evaluations of biases against those classes.
- Despite Meta's claims that FACET is more thorough than previous computer vision bias benchmarks, the company has faced criticism for its track record in responsible AI, including accusations of bias against certain demographic groups.
- Meta has also made available a web-based data set explorer tool. However, developers must agree not to train computer vision models on FACET; they may only use it to evaluate, test, and benchmark them.
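Evaluations of this kind generally compare a model's performance across annotated groups and report the gap. The sketch below is purely illustrative, assuming an invented record schema rather than FACET's actual format or API, and it uses accuracy as a stand-in for whatever metric a real study would pick:

```python
from collections import defaultdict

def disparity_by_group(records):
    """Compute per-group accuracy and the max-min gap (disparity).

    `records` is a list of (group, predicted_label, true_label) tuples.
    The field names and the accuracy metric are illustrative, not
    FACET's actual schema or methodology.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, pred, true in records:
        total[group] += 1
        correct[group] += int(pred == true)
    accuracy = {g: correct[g] / total[g] for g in total}
    gap = max(accuracy.values()) - min(accuracy.values())
    return accuracy, gap

# Toy data: a hypothetical model that recognizes "doctor" more
# reliably for people annotated as group A than group B.
records = [
    ("A", "doctor", "doctor"), ("A", "doctor", "doctor"),
    ("A", "nurse",  "doctor"), ("A", "doctor", "doctor"),
    ("B", "nurse",  "doctor"), ("B", "nurse",  "doctor"),
    ("B", "doctor", "doctor"), ("B", "nurse",  "doctor"),
]
accuracy, gap = disparity_by_group(records)
print(accuracy)  # {'A': 0.75, 'B': 0.25}
print(gap)       # 0.5
```

A nonzero gap like this is the kind of signal a benchmark such as FACET is meant to surface; a real evaluation would use many more images and task-appropriate metrics.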