Google pauses ‘absurdly woke’ Gemini AI chatbot’s image tool after backlash over historically inaccurate pictures

Google said Thursday it would “pause” its Gemini chatbot’s image generation tool after it was widely panned for creating “diverse” images that were not historically or factually accurate — such as black Vikings, female popes and Native Americans among the Founding Fathers.

Social media users had blasted Gemini as “absurdly woke” and “unusable” after requests for images of historical subjects returned bizarrely revisionist pictures.

“We’re already working to address recent issues with Gemini’s image generation feature,” Google said in a statement posted on X. “While we do this, we’re going to pause the image generation of people and will re-release an improved version soon.”

Examples included an AI image of a black man who appeared to represent George Washington, complete with a white powdered wig and Continental Army uniform, and a Southeast Asian woman dressed in papal attire, even though all 266 popes throughout history have been men.

In another shocking example uncovered by The Verge, Gemini even generated “diverse” representations of Nazi-era German soldiers, including an Asian woman and a black man decked out in 1943 military garb.

Since Google has not published the parameters that govern the Gemini chatbot’s behavior, it is difficult to get a clear explanation of why the software was inventing diverse versions of historical figures and events.

William A. Jacobson, a Cornell University law professor and founder of the watchdog group Equal Protection Project, told The Post: “In the name of anti-bias, actual bias is being built into the systems.”

“This is a concern not just for search results, but real-world applications where ‘bias free’ algorithm testing actually is building bias into the system by targeting end results that amount to quotas.”

The problem may come down to Google’s “training process” for the “large language model” that powers Gemini’s image tool, according to Fabio Motoki, a lecturer at the UK’s University of East Anglia who co-authored a paper last year that found a noticeable left-leaning bias in ChatGPT.

“Remember that reinforcement learning from human feedback (RLHF) is about people telling the model what is better and what is worse, in practice shaping its ‘reward’ function – technically, its loss function,” Motoki told The Post. 

“So, depending on which people Google is recruiting, or which instructions Google is giving them, it could lead to this problem.”
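
To illustrate the mechanism Motoki describes, the sketch below shows the standard pairwise preference loss used in reward-model training. It is a toy example and assumes nothing about Google’s actual pipeline; the scores are invented. The point is that whichever response human raters mark as preferred gets pushed toward a higher score, so the raters, and the instructions they work under, directly shape what the model learns to treat as “better.”

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Pairwise (Bradley-Terry) loss commonly used to train reward models.

    The loss shrinks as the model scores the human-preferred response
    above the rejected one, so the raters' choices directly shape the
    objective the model optimizes.
    """
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Hypothetical preference pairs: (score of the response the raters
# preferred, score of the one they rejected). Which response counts
# as "chosen" is decided entirely by the human labelers and by the
# instructions they were given.
pairs = [(2.1, 0.3), (0.5, 1.8), (1.2, 1.1)]

mean_loss = sum(preference_loss(c, r) for c, r in pairs) / len(pairs)
print(f"mean preference loss: {mean_loss:.3f}")
```

If the labeling pool or its instructions systematically favor certain depictions, that preference is baked into the loss and, downstream, into the images the model produces, which is the failure mode Jacobson and Motoki are pointing to.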

It was a significant misstep for the search giant, which had just rebranded its main AI chatbot from Bard earlier this month and introduced heavily touted new features — including image generation.

The blunder also came days after OpenAI, which operates the popular ChatGPT, introduced a new AI tool called Sora that creates videos based on users’ text prompts.

Google had earlier admitted that the chatbot’s erratic behavior needed to be fixed.

“We’re working to improve these kinds of depictions immediately,” Jack Krawczyk, Google’s senior director of product management for Gemini experiences, told The Post.

“Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”

The Post has reached out to Google for further comment.

When asked by The Post to provide its trust and safety guidelines, Gemini acknowledged that they were not “publicly disclosed due to technical complexities and intellectual property considerations.”

In its responses to prompts, the chatbot also admitted it was aware of “criticisms that Gemini might have prioritized forced diversity in its image generation, leading to historically inaccurate portrayals.”

“The algorithms behind image generation models are complex and still under development,” Gemini said. “They may struggle to understand the nuances of historical context and cultural representation, leading to inaccurate outputs.”
