Google Pauses Gemini AI Image Generator Over Historical Inaccuracies

After a backlash over historically inaccurate depictions, Google is pausing the new image-generating feature in its Gemini artificial intelligence chatbot. The tech giant on Thursday temporarily halted the feature after users publicized examples of racially diverse Nazi-era German soldiers, Black Vikings, and women as popes, among other historically implausible images generated from text prompts. In a post on X, Google said it would refocus efforts to ensure that the model “accounts for gender and racial representation in its training data” while striving for accuracy.

Image generation is one of the newest capabilities in the revamped Gemini, which launched this month, and is meant to respond to text prompts with images of people or objects. But the feature quickly drew criticism on social media and from the right, with some accusing Google of being “laughably woke” in its pursuit of diversity to the detriment of truth and accuracy.

In a post on X on Wednesday, Google product lead Jack Krawczyk addressed the uproar, saying that while Gemini typically generates a “wide range of people,” it was missing the mark with some historical image depictions.

While apologizing for the errors, Krawczyk defended the model’s capabilities and said it was trying to compensate for racial and gender bias in its training data. He said that, given enough information about a person, such as their name, occupation, or age, Gemini could portray them accurately in an image. However, he acknowledged that the model was still learning to do this and that improvements would take time.

Users earlier this week posted screenshots on social media showing the chatbot inaccurately depicting historical scenes with racially diverse characters, including an image of a German-Jewish soldier and a scene of four Swedish women from WWII. They also highlighted instances where the AI produced non-white people when asked to depict the U.S. Founding Fathers or Nazi-era German soldiers, a portrayal that was met with derision on social media and from the right.

The blunder comes as Google aggressively pushes AI across its business, including services such as augmented reality and facial recognition technology. Alphabet’s X division is also working to develop autonomous cars, drones, and other high-tech projects, and the company seeks to boost advertising and corporate partnership revenue by adding AI to its search engine and Assistant.
