A Tough Pill for Rite Aid to Swallow

The FTC isn't Happy


Imagine going to your local pharmacy to pick up a few things. After you pay for your items, you head towards the door, only to be stopped by a member of staff who tells you they believe you've shoplifted in the past. How do they know this? Well, their facial recognition software told them. The issue is that you are a law-abiding citizen and have never stolen from this pharmacy, or any other pharmacy, for that matter.

This may seem like a crazy hypothetical, but according to the Federal Trade Commission (FTC), this is precisely what was happening with Rite Aid. 

From 2012 to 2020, Rite Aid's facial recognition software notified staff about potential shoplifters, and staff would then follow those customers around the store, search them, and attempt to kick them out. Hell, they would even call the cops on these “persons of interest.”

The part that is disappointing but not that shocking is that this disproportionately impacted people of colour and women. This has long been a fear in the computer science field: depending on the data set used to train it, an AI system can, and often does, pick up on biases. That leads to false-positive identifications, which can lead to major real-world harm.
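To make the bias problem concrete, here is a minimal sketch, using entirely hypothetical numbers, of how a face-matching system's false-positive rate can be measured per demographic group. A system trained on data that under-represents some groups will often show exactly this kind of gap.

```python
def false_positive_rate(results):
    """results: list of (flagged_by_system, actually_on_watchlist) booleans."""
    false_positives = sum(1 for flagged, actual in results if flagged and not actual)
    innocents = sum(1 for _, actual in results if not actual)
    return false_positives / innocents if innocents else 0.0

# Hypothetical outcomes: every shopper below is innocent
# (actually_on_watchlist=False), yet the system flags some of them.
group_a = [(False, False)] * 95 + [(True, False)] * 5   # 5 of 100 wrongly flagged
group_b = [(False, False)] * 80 + [(True, False)] * 20  # 20 of 100 wrongly flagged

print(f"Group A false-positive rate: {false_positive_rate(group_a):.0%}")  # 5%
print(f"Group B false-positive rate: {false_positive_rate(group_b):.0%}")  # 20%
```

Even though both groups contain only innocent shoppers, the system in this sketch wrongly flags one group four times as often as the other, which is the pattern the FTC described.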

In one instance, an 11-year-old girl was stopped and searched due to a false positive. Imagine leaving a store with your child and being stopped and searched because a computer program got something wrong. Insanity. 

The FTC claimed in a nearly 54-page complaint that the company collected over ten thousand images, often of low quality, to help build the system that identified persons of interest. But not only were customers not told the stores were using this technology, employees were discouraged from revealing it or even talking about it.

Samuel Levine, the director of the FTC's Bureau of Consumer Protection, went on record saying: “Rite Aid's reckless use of facial surveillance systems left its customers facing humiliation and other harms, and its order violations put consumers' sensitive information at risk.”

In a statement, Rite Aid revealed that while it disagrees with the accusations levied against it, it is “pleased to reach an agreement.” Although not all the details are currently known, we know that per the agreement, Rite Aid has been barred from using AI facial recognition as a surveillance tool for five years! We can also assume the company will be fairly protected by the bankruptcy protection it filed for back in October.

By the time Rite Aid gets permission to use the technology again, we can only assume it will be even better than what we currently have. That's how AI works. It is only ever getting better.

But this story should be a firm reminder that just because it is getting better doesn’t mean it is infallible. A computer program with a racial bias is a prime example of how humans leave their fingerprints all over everything AI.
