The FTC's investigation revealed that Rite Aid had quietly deployed facial recognition systems in about 200 U.S. stores over an eight-year period beginning in 2012, primarily in lower-income, non-white neighborhoods. Working with two contracted vendors, the company built a "watchlist database" of customers it claimed had engaged in criminal activity at its stores. The FTC also found that Rite Aid failed to inform customers that the technology was in use and instructed employees not to disclose it. Rite Aid disputed the allegations but said it was pleased to have reached an agreement with the FTC.
Key takeaways:
- Rite Aid has been banned from using facial recognition software for five years by the Federal Trade Commission (FTC) due to its "reckless use of facial surveillance systems" that put customers' sensitive information at risk.
- The FTC's order also requires Rite Aid to delete any images collected as part of its facial recognition system rollout and implement a robust data security program to protect any personal data it collects.
- Rite Aid had secretly introduced facial recognition systems across some 200 U.S. stores over an eight-year period, with lower-income, non-white neighborhoods serving as the technology testbed. The FTC alleges that Rite Aid created a "watchlist database" of customers it claimed had engaged in criminal activity, and that erroneous matches against that database led to false accusations and customer harassment.
- The FTC's findings also highlight inherent biases in AI systems, noting that Rite Aid's technology was more likely to generate false positives in stores located in plurality-Black and plurality-Asian communities than in plurality-White communities. The company also failed to test or measure the accuracy of its facial recognition system before or after deployment; the sketch below shows what such a per-community accuracy check could look like.
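The FTC's point about unmeasured accuracy is, at its core, a missing audit step. The snippet below is a minimal, purely illustrative sketch of how a retailer could tally false-positive rates per store community before acting on match alerts; the data, group labels, and review workflow are all hypothetical and are not drawn from Rite Aid's system or the FTC's complaint.

```python
# Illustrative sketch only: auditing false-positive rates of match alerts by
# store demographic group. All records and labels here are hypothetical.
from collections import defaultdict

# Each record: (store_demographic_group, alert_was_false_positive)
# In practice these labels would come from independent human review of each alert.
match_alerts = [
    ("plurality_black", True), ("plurality_black", False),
    ("plurality_asian", True), ("plurality_white", False),
    ("plurality_white", False), ("plurality_black", True),
]

counts = defaultdict(lambda: {"false_positives": 0, "total": 0})
for group, is_false_positive in match_alerts:
    counts[group]["total"] += 1
    counts[group]["false_positives"] += int(is_false_positive)

for group, c in sorted(counts.items()):
    rate = c["false_positives"] / c["total"]
    print(f"{group}: {c['false_positives']}/{c['total']} alerts were false positives ({rate:.0%})")
```

A real audit would also need statistically meaningful sample sizes per group and ground-truth review of every alert before the disparity figures could support any operational decision.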