Rite Aid deployed AI-based facial recognition technology to identify customers suspected of problematic behavior, but the FTC alleged that the company failed to take reasonable measures to prevent harm to customers who were falsely identified and accused of shoplifting. Under the settlement, Rite Aid must implement an information security program and safeguards against similar harm if it uses comparable automated systems in the future. The company must also discontinue the technology if it cannot control the potential risks to consumers.
Key takeaways:
- The Federal Trade Commission (FTC) has banned Rite Aid from using facial recognition technology for surveillance purposes for five years, citing the company's alleged failure to prevent harm to consumers and to put reasonable procedures in place.
- Rite Aid's use of the technology allegedly led to consumers, often women and people of color, being falsely tagged as shoplifters, resulting in public humiliation and police involvement.
- As part of the settlement, Rite Aid must implement an information security program and safeguards against comparable harm if it deploys automated systems like this in the future.
- From 2012 to 2020, Rite Aid's system generated thousands of false-positive matches based on low-quality photos, sometimes flagging the same person at dozens of different stores across the country.