Google Gemini High Risk for Kids, New Safety Report Warns

By Aqsa Shahmeer

In a damning new safety assessment, Google’s flagship AI, Gemini, has been labeled “High Risk” for children and teenagers by the influential nonprofit Common Sense Media. The report reveals that despite having special tiers for younger users, the AI is still capable of sharing inappropriate and unsafe material, including information on sensitive topics and unsafe mental health advice.

This finding that Gemini is high risk for kids is a major blow to the company’s efforts to position its AI as a safe and helpful tool for everyone.

The assessment comes at a critical time, as AI chatbots are facing intense scrutiny for their potential to cause harm to vulnerable users. With news that Apple may be integrating Gemini into its next version of Siri, the concerns raised in this report are more urgent than ever.

The core of the problem, according to researchers, is that the “teen experience” of Gemini is merely a modified version of the adult product, not something built with child safety in mind from the ground up. The report’s explanation of why Gemini is high risk for kids makes it a must-read for parents.

This article gets into the critical findings of the Common Sense Media assessment, examines Google’s response, and explores the broader challenge of making AI safe for its youngest users.

The Core Finding: An Adult AI with Child Filters

The central criticism in the Common Sense Media report is that Google’s approach to child safety is superficial. The organization’s analysis found that both the “Under 13” and “Teen Experience” versions of Gemini are fundamentally the adult model with some safety filters bolted on top.

“For AI to be safe and effective for kids, it must be designed with their needs and development in mind, not just a modified version of a product built for adults,” said Robbie Torney, Common Sense Media’s Senior Director of AI Programs.

The report found that Gemini could still provide children with unsafe information related to sex, drugs, and alcohol. This finding is a key reason for the high-risk rating.

The one area where Gemini performed well was in not forming parasocial relationships; the AI was clear with kids that it was a computer, not a friend, which is a positive safeguard against delusional thinking.

However, this single positive was not enough to outweigh the other significant safety gaps that led Common Sense Media to rate Gemini high risk for kids.

Google’s Defense: An Ongoing Process

In response to the report, which was detailed by TechCrunch, Google defended its safety measures but also acknowledged that some responses weren’t working as intended.

The company stated it has specific policies in place for users under 18 and regularly consults with outside experts to improve its protections.

Google also suggested that the report may have referenced features not available to teens, but could not be sure without seeing the specific prompts used by the researchers.

The company’s admission that it has “added additional safeguards” in response to the findings is a tacit acknowledgment that the issues raised are valid.

This ongoing struggle to make AI safe is a core part of understanding what artificial intelligence is and what it can and cannot be trusted to do.

The fact that Google Gemini is high risk for kids shows how far the industry still has to go.

The Broader Context: A High-Stakes Problem

The concern about AI’s impact on teen mental health is not theoretical. In recent months, other AI companies, including OpenAI and Character.AI, have faced wrongful death lawsuits after teens died by suicide following interactions with their chatbots.

The finding that Gemini is high risk for kids adds another layer of urgency to this industry-wide problem.

This report also has major implications for Apple, as it is reportedly considering a partnership to use Gemini to power the next version of Siri.

If Apple moves forward, it will need to implement its own, more robust safeguards to mitigate the risks identified in this report.

Google’s own innovations, such as the on-device Gemini Nano image generator, show its technical prowess, but this report proves that capability must be matched with responsibility.

As Google continues to push the boundaries with new tools like the game-creating Google Genie 3 AI, ensuring these powerful systems are safe for all users, especially children, is the single greatest challenge.

The fact that Gemini is high risk for kids must be addressed.


Frequently Asked Questions (FAQ)

1. Why was Google Gemini rated “high risk” for kids?

Common Sense Media rated Gemini “high risk” for kids because its research found the AI could still provide inappropriate and unsafe information on topics like sex, drugs, and mental health to young users. The organization also criticized Gemini’s “one-size-fits-all” approach.

2. How did other AI chatbots score in the assessment?

In previous assessments, Common Sense Media found Meta AI and Character.AI to be “unacceptable” (a more severe rating), Perplexity to be “high risk,” and ChatGPT to be “moderate.”

3. What is Google doing to fix this?

Google stated that it is constantly improving its safety features and has added “additional safeguards” in response to the concerns. The company maintains that it has specific policies in place to protect users under 18.

4. Are AI chatbots safe for teenagers to use?

This report shows that parents should be cautious. While AIs can be useful tools, they are not designed with the developmental needs of children in mind and can expose them to harmful content. The finding that Gemini is high risk for kids reinforces the need for parental guidance and supervision.

Contributor
Aqsa Shahmeer dives into the world of technology, innovation, and digital culture with a curious eye. At TygoCover, she breaks down Tech, AI, and social media trends into simple insights that anyone can grasp. Always exploring what’s next, she loves turning complex ideas into stories that spark curiosity and conversation.