The Federal Trade Commission on Tuesday announced action against the pharmacy chain Rite Aid for its use of face recognition technology in hundreds of stores. The regulator found that Rite Aid deployed a massive, error-riddled surveillance program, chose vendors that could not properly safeguard the personal data the chain hoarded, and attempted to keep it all under wraps. Under a proposed settlement, Rite Aid can't operate a face recognition system in any of its stores for five years.
EFF advocates for laws that require companies to get clear, opt-in consent from any person before scanning their faces. Rite Aid's program, as described in the complaint, would violate such laws. The FTC’s action against Rite Aid illustrates many of the problems we have raised about face recognition—including how data collected for face recognition systems is often insufficiently protected, and how systems are often deployed in ways that disproportionately hurt BIPOC communities.
The FTC’s complaint outlines a face recognition system that often relied on "low-quality" images to identify so-called “persons of interest,” and notes that the chain instructed staff to ask flagged customers to leave its stores.
From the FTC's press release announcing the action:
According to the complaint, Rite Aid contracted with two companies to help create a database of images of individuals—considered to be “persons of interest” because Rite Aid believed they engaged in or attempted to engage in criminal activity at one of its retail locations—along with their names and other information such as any criminal background data. The company collected tens of thousands of images of individuals, many of which were low-quality and came from Rite Aid’s security cameras, employee phone cameras and even news stories, according to the complaint.
Rite Aid's system falsely flagged numerous customers, according to the complaint, including an 11-year-old girl whom employees searched based on a false-positive result. Another unnamed customer quoted in the complaint told Rite Aid, "Before any of your associates approach someone in this manner they should be absolutely sure because the effect that it can [have] on a person could be emotionally damaging.... [E]very black man is not [a] thief nor should they be made to feel like one.”
Even if Rite Aid's face recognition technology had been completely accurate (and it clearly was not), the way the company deployed it was wrong. Rite Aid scanned everyone who came into certain stores and matched them against an internal list. Any company that does this assumes the guilt of everyone who walks in the door. And, as we have pointed out time and again, that assumption of guilt doesn't fall on all customers equally: People of color, who are already historically over-surveilled, are the ones who most often find themselves under new surveillance.
As the FTC explains in its complaint:
"[A]lthough approximately 80 percent of Rite Aid stores are located in plurality-White (i.e., where White people are the single largest group by race or ethnicity) areas, about 60 percent of Rite Aid stores that used facial recognition technology were located in plurality non-White areas. As a result, store patrons in plurality-Black, plurality-Asian, and plurality-Latino areas were more likely to be subjected to and surveilled by Rite Aid’s facial recognition technology."
The FTC's action rightly pulls the many problems with face recognition into the spotlight. It also proposes remedies for the many ways Rite Aid failed its customers: failing to ensure its system was safe and functional, failing to train employees on how to interpret results, and failing to evaluate whether the technology was causing harm.
We encourage lawmakers to go further by enacting laws that require businesses to get opt-in consent before collecting or disclosing a person’s biometrics. Such laws would ensure that people can make their own decisions about whether to participate in face recognition systems and know in advance which companies are using them.