AI Goes Too Woke!

Google's Gemini's Black Pope

My Favorite Newsletter: Stay ahead on the business of AI 

Have you heard of Prompts Daily newsletter? I recently came across it and absolutely love it.

AI news, insights, tools and workflows. If you want to keep up with the business of AI, you need to be subscribed to the newsletter (it’s free).

Read by executives from industry-leading companies like Google, HubSpot, Meta, and more.

Want to receive daily intel on the latest in business/AI?

[Image: Google Gemini's AI-generated images of a Black Pope]

You know how real people write this newsletter? Well, both writers (Nathan and Rory) were unwell last week and couldn't send it out. But we're back and healthy this week, so let's jump into the story!

It often feels like the world has lost its mind. That could be because our media takes nothing stories and blows them out of proportion - a "whatever we can spin to fit a certain agenda, we will" mentality from most news publications. But sometimes a story is so insane that it doesn't need a spin. We can look at something and tell that the intentions were good, but the road still led straight to disaster.

This past week, Google announced that its AI chatbot would pause generating images of people. The reason: the AI seemed to refuse to show white people in any of its output, even rendering historical figures as other ethnicities. Google publicly acknowledged the inaccuracies and said it is working to improve these depictions.

Don't get me wrong, we like Hamilton as much as - if not more than - the next guy. We're super cool with the idea of a black George Washington. The issue is that, historically speaking, he was an old white man who owned slaves. And for better or worse, Gemini was not offering that option. Instead, we got this:

Which, admittedly, is pretty cool. But you might see why swapping the race of a historical figure is … problematic. And Gemini did it with countless other prompts. It had no problem showcasing the achievements of other races; it simply refused to show white people. Hell, even when asked for a picture of a 1940s German soldier, it gave an image of an Asian woman in a German uniform. Simply bizarre.

Even when prompted for a picture of a white person, the chatbot responded that it could not because doing so "reinforces harmful stereotypes and generalizations about people based on their race." When pressed on why it wouldn't show any white people, the AI answered with a nuanced take on historical context and the whitewashing of history.

The whitewashing of history is undeniable. It's just not a good answer to why the model portrayed historical figures as different races.

To be clear, AI models can inherit biases from their training data. This can have real-life consequences - especially given some of the data and results of facial recognition systems used by businesses. But this over-correction caused a backlash and fed into the nonstop culture war. Google committed the cardinal sin of letting Fox News be right about something.

Google has paused its AI's image generation of people. While some will spin this story as "woke" culture gone too far, in reality it is an AI that was told to take nuanced views on complicated subject matter. And as great as AI is, it's not good enough to handle the issue of racism just yet.

This should be an absurd, silly story about how, as great as AI is, it's still learning. Instead, it has become another place to lash out in more culture war nonsense.
