Rite Aid deployed facial recognition in hundreds of stores, report finds
The surveillance fell disproportionately on customers in lower-income areas.
Pharmacy chain Rite Aid deployed facial recognition technology in hundreds of store locations in the nation’s largest cities, particularly in low-income neighborhoods predominantly home to people of color, a new report has found.
Reuters today published an in-depth report citing internal documents, interviews with more than 40 sources familiar with the systems, and first-hand observation of cameras in stores, which found the technology was deployed in at least 200 stores, including 75 identified in New York and Los Angeles.
Whenever a customer entered a store that used the technology, their image was logged in a database. On return visits, the software added new images to existing customer profiles. It then ran those images against a list of “people Rite Aid previously observed engaging in potential criminal activity.” When the software made a match, store security employees received a smartphone push notification.
Rite Aid declined to identify which store locations used the technology, but Reuters journalists were able to spot it in 33 of the 75 stores they visited in Manhattan and Los Angeles.
Among those 75 stores, Reuters found, locations in poorer areas were significantly more likely than those in higher-income areas to have facial recognition in use: 68 percent of the stores Reuters visited in lower-income areas had it, compared with 25 percent of the stores in wealthier neighborhoods. In areas where Black or Latinx residents made up the largest demographic group, Rite Aid locations were more than three times as likely to be using facial recognition as in predominantly white neighborhoods.
One of the tools Rite Aid reportedly used was DeepCam, which is connected to Chinese state investment funds. Senator Marco Rubio (R-Fla.) told Reuters the link was “outrageous,” adding, “China’s efforts to export its surveillance state to collect data in America would be an unacceptable, serious threat.”
Rite Aid confirmed the existence of the program to Reuters in February and, at the time, defended the use of the technology. A week before Reuters published its story, though, Rite Aid said it no longer used the software and that the cameras themselves had been turned off.
“This decision was in part based on a larger industry conversation,” a Rite Aid representative told Reuters. “Other large technology companies seem to be scaling back or rethinking their efforts around facial recognition given increasing uncertainty around the technology’s utility.”
That “larger industry conversation”
Facial recognition technology has notorious disparities in effectiveness depending on whom it is trying to identify. By and large, the algorithms in use work better on men and light-skinned people than they do on women and dark-skinned people. Adding masks to the mix, now that we are in the midst of a pandemic, makes facial recognition systems perform even worse.
A Rite Aid employee based in Detroit, whose population is more than 75 percent Black, told Reuters bluntly that the software the company started out using “doesn’t pick up Black people well.” The loss-prevention staffer added, “If your eyes are the same way, or if you’re wearing your headband like another person is wearing a headband, you’re going to get a hit.”
The American Civil Liberties Union recently filed a complaint against the Detroit police on behalf of a Michigan man who was arrested in January based on a false positive match generated by facial recognition software. In light of the complaint, Detroit’s police chief admitted, “If we were just to use the technology by itself, to identify someone, I would say 96 percent of the time it would misidentify.”
The disparate impact of facial recognition on Black individuals has become part of the discussion in the midst of the nationwide protests against police brutality and overreach and in support of Black communities. IBM in June walked away from the facial recognition business, with CEO Arvind Krishna saying, “vendors and users of AI systems have a shared responsibility to ensure that AI is tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported.” Days later, Amazon, too, put a one-year moratorium on allowing police to use its facial recognition platform, Rekognition.
Read the original article over at ArsTechnica.com