Did Netflix Go Too Far?

The Ethics of Using AI in Documentaries

We’ve all, at some point, given in and binged one of Netflix’s hugely popular true crime documentaries. The latest big hit has been a standalone documentary titled “What Jennifer Did.”

The film begins with a recording of a police call from a panicked Jennifer. She’s tied to the banister upstairs, terrified and begging for help after three armed men broke into the house, shooting her mother and father downstairs.

Her mother sadly dies, but her father survives and wakes after being brought out of an induced coma. When police interviewed him, they quickly realized his story didn’t align at all with the one Jennifer had been telling them. Somewhat ominously, her father asked the detectives to use their police techniques to find out what Jennifer did.

Okay, we’re hooked - but what does any of this have to do with AI? Did Jennifer use AI to help her tell the many lies she was found to have told? No, she did not. The reason this documentary has caused controversy and a flurry of news stories has much less to do with the lies Jennifer told and much more with the lies in the images featured in the documentary itself.

Like almost any documentary nowadays, What Jennifer Did features plenty of recreations. The cinematic shots of crime scenes you see in documentaries are almost always filmed years later by the filmmakers, who often hire actors to recreate parts of the story so it can be told visually for the audience at home.

Documentary makers usually signal to the audience that these scenes are recreated, either directly with a title card at the start of the film or indirectly by never showing the actors' faces, color-grading them differently, or even clearly showing the faces of actors who do not look like the real people from the story.

In What Jennifer Did, there are parts of the documentary in which we see candid photographs of Jennifer. As her high-school friend describes her as “bubbly, happy, confident, and very genuine,” we’re shown photos of Jennifer posing at parties and throwing the peace sign with her tongue out.

There’s just one problem — viewers were quick to spot that the photos might not be genuine… they appeared to be AI-generated. It’s a little odd to see her posing pretty similarly in each of the images, but on close inspection, you’ll see Jennifer is missing a thumb and two fingers in one of the photos.

Futurism was the first to report the observations and was quick to note that human hands can be the first sign of an AI image. So remember, if you ever wonder if you’ve woken up inside an AI simulation… check your hands.

Netflix has been asked for comment by just about every publication that has covered the story. It seems they have yet to release a statement.

If Netflix used AI (it probably did) to generate or manipulate these images, it raises important ethical questions about the use of AI in documentaries. AI content generation could be a brilliant tool for bringing to life parts of stories that have no existing footage. But in this case, using fake images and passing them off as real in a documentary about someone who still insists she’s innocent is, well, complicated.
