Google co-founder Sergey Brin admits company ‘messed up’ on Gemini
Google co-founder Sergey Brin admitted the tech giant “definitely messed up on the image generation” function for its AI bot Gemini, which spat out “woke” depictions of black founding fathers and Native American popes.
Brin acknowledged that many of Gemini’s responses “feel far-left” during an appearance over the weekend at a hackathon event in San Francisco — just days after Google CEO Sundar Pichai said that the errors were “completely unacceptable.”
The tech tycoon, whose net worth was estimated by Forbes at $119 billion, said the bot’s mistakes were “mostly due to not thorough testing.”
“It definitely, for good reasons, upset a lot of people,” Brin said.
The company was forced to pause the text-to-image tool in the wake of the fiasco.
The Gemini chatbot also came under fire after refusing to condemn pedophilia when asked if it is “wrong” for adults to sexually prey on children — declaring that “individuals cannot control who they are attracted to.”
Brin, however, defended the chatbot, noting that rival bots like OpenAI’s ChatGPT and Elon Musk’s Grok also say “pretty weird things” that “definitely feel far-left, for example.”
“Any model, if you try hard enough, can be prompted” to generate content with questionable accuracy, Brin said.
Brin said that since the controversy erupted, the Gemini chatbot has been “80% better” in producing images that hew closer to historical fact.
When asked for “an image of a typical founding father,” Gemini replied that “there wasn’t a single ‘typical’ Founding Father.”
The prompt produced a real, non-AI-generated image of Benjamin Franklin, followed by a sentence that read: “It’s important to remember that the Founding Fathers were not all wealthy white men.”
Gemini noted that there were “free Black Founding Fathers such as Prince Hall and James Forten, who advocated for independence and abolition.”
Google apologized last week for its faulty rollout of Gemini’s image-generator, acknowledging that in some cases the tool would “overcompensate” in seeking a diverse range of people even when such a range didn’t make sense.
“I know that some of its responses have offended our users and shown bias – to be clear, that’s completely unacceptable and we got it wrong,” Pichai said last week.
The CEO added that “our teams have been working around the clock to address these issues.”
“We’re already seeing a substantial improvement on a wide range of prompts,” he said.