Google’s Gemini Crisis Underscores Necessity of Human Editors for AI

As if Artificial Intelligence (AI) hadn't already generated enough controversy, from governance debates over keeping AI safe and ethical to factually incorrect hallucinations and multiple lawsuits against OpenAI, Google's Gemini crisis has raised a new set of red flags and heightened the need for human editors of AI.

Gemini used AI to automatically generate images and text responses to user queries, but it faced a host of challenges in maintaining accuracy and avoiding bias. This led to Google's decision to pause Gemini's image generation of people, a feature the company itself acknowledged was unreliable.

Google was hammered for Gemini's failure to generate accurate images of white people: it portrayed America's Founding Fathers as non-white and misrepresented the races of Google's own co-founders. While Google's "woke" problem with Gemini garnered the initial media attention, critics later pointed out similar issues with Gemini's text responses.

As Peter Kafka reported in Business Insider, tech analyst Ben Thompson noted that Gemini has, among other things, struggled to say whether Hitler's actions or Elon Musk's tweets have been worse for society, and has said it wouldn't promote meat or fossil fuels.

Sundar Pichai, Google's chief executive officer, told employees in an internal memo that these historically inaccurate images and text have "offended our users and shown bias." The memo was obtained by The Verge and was first reported by Semafor.

Google has since apologized for “missing the mark” and said it’s “been working around the clock” to address “problematic text and image responses in the Gemini app.” 

‘No AI Is Perfect’

"No AI is perfect," Pichai said, "especially at this emerging stage of the industry's development, but we know the bar is high for us and we will keep at it for however long it takes."

Fallout from the crisis includes news that Google may lay off 10 employees from its 250-member trust and safety team while asking other members of the group to be on standby in case further issues arise with Gemini, according to Bloomberg as reported by The Hindustan Times. This team is tasked with setting rules for AI products to guard against "bad actors" who could manipulate the tools. It also conducts risk evaluations to ensure the tools are safe for Google's global user base.

Lawmakers are even starting to react to Google's Gemini crisis. One Republican senator went so far as to call for the breakup of one of the most well-known, important and profitable tech companies.

“This is one of the most dangerous companies in the world,” said Senator J.D. Vance, R-Ohio, during an interview with FOX Business’ Maria Bartiromo on “Sunday Morning Futures.” “It actively solicits and forces left-wing bias down the throats of the American nation.”

The Impact of Bias and GIGO

While the company has fixed or otherwise addressed some of the offensive and inaccurate responses to these queries, the situation has exposed how AI responses can be skewed by bias in the large language models that serve as their knowledge base. Every AI model must overcome the biases that exist across the broader internet, which fuels its generation tools, without overcorrecting in the other direction. It must avoid falling prey to the tried-and-true computer science maxim of GIGO: garbage in, garbage out. In other words, flawed, biased or poor-quality ("garbage") input will produce flawed, biased or poor-quality ("garbage") output.

This recent news is part of the continuing debate about the reliability and ethical use of generative AI. The continued unpredictability of AI-generated responses, whose errors are often far from obvious, highlights the need for human oversight of AI-generated content. While generative AI models have proven that they offer tremendous potential for content creation, they must be paired with human AI content editors to ensure responsible and trustworthy outputs.

By collaborating with generative AI, human editors can harness the power of this technology and mitigate its risks while amplifying human creativity. That should be the role of generative AI: to help us do our jobs as content strategists and content creators better, ensuring the highest-quality content that captures the interest of our intended audiences and moves them to act.

The human touch is essential to maximizing the power of generative AI. By striking the right balance between AI assistance and human judgment, content creators can ensure that the finished content is factually correct, original and relevant to its key audiences, carries a personal point of view, and adheres to ethical standards. Whether the work is public relations, digital marketing or web content development, only human editors and writers can make this possible. Remember, we are always writing for humans.

Interested in finding out how Writing For Humans™ can help you avoid mistakes like Google's and instead ensure that the most clear, concise and compelling content reaches your key audiences and moves them to act? Contact me at randy@writingforhumans.co for a free, no-obligation consultation on leveraging this powerful technology.

###
