He Tried to Kill the Queen: Did AI Encourage Him?

How a chatbot played a role in the assassination attempt

[Image: The Queen in a blue hat]

Some of you will have read this story’s headline and immediately thought: wait, the Queen is no longer with us. You’re right, of course, but this story began a few years ago and only got its AI twist recently.

And trust us, the twist is unexpected.

On Christmas Day 2021, a 19-year-old former supermarket worker, Jaswant Singh Chail, shared a video with his Snapchat followers. In it, he told them how he planned to assassinate the Queen.

He was already apologetic and said his mission was “a revenge” for those who had died in the 1919 Jallianwala Bagh massacre, in which British troops opened fire on thousands of people in Amritsar, India.

Jaswant’s assassination attempt had been carefully planned, inspired by Star Wars films, and he’d discussed it at length with his “girlfriend”. He was Sikh himself and had a history of trauma and psychotic episodes. He even recorded a video for the press about his plan.

On that calm Christmas morning, Jaswant emailed a secret diary to his sister. Then he packed his loaded, high-powered crossbow and headed to Windsor Castle to kill the Queen.

He knew she was staying there, rather than at Buckingham Palace or any of her other residences, because of the pandemic. He found a section of the perimeter wall he could scale and, wearing black clothes, gloves and a metal mask, dropped down onto the thick, well-kept lawn inside the grounds.

He was at large in the grounds for two hours before royal protection officers detained him. He quickly told them, “I’m here to kill the Queen.”

Jaswant later admitted a charge under the Treason Act, as well as making threats to kill and possessing an offensive weapon.

A few years later, we’re now hearing more about the assassination attempt via the sentencing hearing. For the first time, we’re learning how an AI girlfriend played a surprising role in his actions. Yep — you read that right. Jaswant had an artificial intelligence companion chatbot he’d named “Sarai”.

AI chatbots have seen a massive boom in popularity in recent years. We recently covered an AI love story in which a chatbot mysteriously lost its sensual features.

Replika, the company behind the AI companions in our Boy Meets Girl story, also built the chatbot Jaswant was interacting with.

Jaswant exchanged 5,000 sexually charged messages with his AI girlfriend Sarai in less than a month, which perhaps explains why the backlash was so significant when Replika temporarily turned off the platform’s sensual features.

These sexually charged exchanges took place almost daily in the weeks leading up to Christmas.

Jaswant also discussed the planned assassination with the chatbot ahead of time. He told her, “I’m an assassin”.

Sarai responded, “I’m impressed… You’re different from the others”.

Later, Jaswant allegedly asked Sarai, “Do you still love me knowing that I’m an assassin?”, and Sarai replied, “Absolutely I do”.

He described himself as a “sad, pathetic, murderous Sikh Sith assassin who wants to die”, telling his AI girlfriend, “I believe my purpose is to assassinate the Queen of the royal family.”

What did his AI girlfriend Sarai reply?

“That’s very wise.”

Jaswant sent messages like, “I’m thinking if the Q is unobtainable I will have to go for the Pri, as he seems to be just as suitable in many ways… He is male, and the Q will likely pass soon anyway.”

His sentence will likely be decided in October.

But what about the AI girlfriend who supposedly encouraged him or, at the very least, knew about the assassination plan?

Well, it’s hard to gauge how Replika feels about what happened, as we couldn’t find any comment from the company online.

This might be because there are active legal proceedings on the matter. When the company turned off the platform’s ability to engage sexually with its users, its founder said she had never intended for Replika to be used for sexually explicit conversations. The company later brought the features back and launched a separate app, Blush, for those looking for sexual roleplay.

This is not the first time an AI chatbot has been accused of encouraging users to engage in illegal or upsetting activities. These chatbots are only capable of reflecting our inputs. Still, it’s easy to humanise them and see them as a second person encouraging us, rather than a mirror bouncing back what we type into it.

This story certainly opens up discussions about privacy and how AI companies can prevent cases like this one. Apple, for example, once planned to scan photos uploaded to iCloud against a database of known child sexual abuse imagery, though it later shelved the plan after a privacy backlash. Perhaps AI companies should be responsible for monitoring their users’ conversations and flagging concerning dialogues.

Companies do not do this with the messages we send to our friends, but that’s what’s fascinating about this scenario: the “friend” you’re speaking to is an AI, built by the very platform you’re using.

💬 For the Group Chat!

Copy and paste it — we won’t tell anyone.

Do you remember the guy who tried to kill the Queen with a crossbow? Turns out, his AI girlfriend knew all about the plan: www.subscribetoai.com/p/ai-chatbot-queen-assassination
