Google Gemini High Risk for Kids, New Safety Report Warns

By Aqsa Shahmeer | AI | Last updated: September 8, 2025, 1:07 am
Image: The Google Gemini logo behind a child safety warning sign, representing the report that found Google Gemini high risk for kids.

In a damning new safety assessment, Google’s flagship AI, Gemini, has been labeled “High Risk” for children and teenagers by the influential nonprofit Common Sense Media. The report reveals that despite having special tiers for younger users, the AI is still capable of sharing inappropriate and unsafe material, including information on sensitive topics and unsafe mental health advice.

This finding that Google Gemini is high risk for kids is a major blow to the company’s efforts to position its AI as a safe and helpful tool for everyone.

The assessment comes at a critical time, as AI chatbots are facing intense scrutiny for their potential to cause harm to vulnerable users. With news that Apple may be integrating Gemini into its next version of Siri, the concerns raised in this report are more urgent than ever.

The core of the problem, according to researchers, is that the “teen experience” of Gemini is merely a modified version of the adult product, not something built with child safety in mind from the ground up. This report on why Google Gemini is high risk for kids is a must-read for parents.

This article digs into the critical findings of the Common Sense Media assessment, examines Google’s response, and explores the broader challenge of making AI safe for its youngest users.


The Core Finding: An Adult AI with Child Filters

The central criticism in the Common Sense Media report is that Google’s approach to child safety is superficial. The organization’s analysis found that both the “Under 13” and “Teen Experience” versions of Gemini are fundamentally the adult model with some safety filters bolted on top.

“For AI to be safe and effective for kids, it must be designed with their needs and development in mind, not just a modified version of a product built for adults,” said Robbie Torney, Common Sense Media’s Senior Director of AI Programs.

The report found that Gemini could still provide children with unsafe information related to sex, drugs, and alcohol. This finding is a key reason why Google Gemini is rated high risk for kids.

The one area where Gemini performed well was in not forming parasocial relationships; the AI was clear with kids that it was a computer, not a friend, which is a positive safeguard against delusional thinking.

However, this one positive was not enough to overcome the other significant safety gaps that make Google Gemini high risk for kids.

Google’s Defense: An Ongoing Process

In response to the report, which was detailed by TechCrunch, Google defended its safety measures but also acknowledged that some responses weren’t working as intended.

The company stated it has specific policies in place for users under 18 and regularly consults with outside experts to improve its protections.

Google also suggested that the report may have referenced features not available to teens, but could not be sure without seeing the specific prompts used by the researchers.

The company’s admission that it has “added additional safeguards” in response to the findings is a tacit acknowledgment that the issues raised are valid.

This ongoing struggle to make AI safe is a core part of understanding what artificial intelligence is.

The fact that Google Gemini is high risk for kids shows how far the industry still has to go.


The Broader Context: A High-Stakes Problem

The concern about AI’s impact on teen mental health is not theoretical. In recent months, other AI companies, including OpenAI and Character.AI, have faced wrongful death lawsuits after teens died by suicide following interactions with their chatbots.

The finding that Google Gemini is high risk for kids adds another layer of urgency to this industry-wide problem.

This report also has major implications for Apple, as it is reportedly considering a partnership to use Gemini to power the next version of Siri.

If Apple moves forward, it will need to implement its own, more robust safeguards to mitigate the risks identified in this report.

Google’s own innovations, such as the on-device Gemini Nano image generator, show the company’s technical prowess, but this report proves that capability must be matched with responsibility.

As Google continues to push the boundaries with new tools like the game-creating Google Genie 3 AI, ensuring these powerful systems are safe for all users, especially children, is the single greatest challenge.

The fact that Google Gemini is high risk for kids must be addressed.


Frequently Asked Questions (FAQ)

1. Why was Google Gemini rated “high risk” for kids?

Common Sense Media rated Google Gemini high risk for kids because its researchers found the AI could still provide inappropriate and unsafe information on topics like sex, drugs, and mental health to young users. The organization also criticized its “one-size-fits-all” approach.

2. How did other AI chatbots score in the assessment?

In previous assessments, Common Sense Media found Meta AI and Character.AI to be “unacceptable” (a more severe rating), Perplexity to be “high risk,” and ChatGPT to be “moderate.”

3. What is Google doing to fix this?

Google stated that it is constantly improving its safety features and has added “additional safeguards” in response to the concerns. They maintain that they have specific policies in place to protect users under 18.

4. Are AI chatbots safe for teenagers to use?

This report shows that parents should be cautious. While AI chatbots can be useful tools, they are not designed with the developmental needs of children in mind and can expose them to harmful content. The finding that Google Gemini is high risk for kids reinforces the need for parental guidance and supervision.

Tagged: AI, Artificial Intelligence, Gemini, Gemini 2.5 Flash, Google, Google AI
By Aqsa Shahmeer, Contributor
Aqsa Shahmeer dives into the world of technology, innovation, and digital culture with a curious eye. At Tygo Cover, she breaks down Tech, AI, and social media trends into simple insights that anyone can grasp. Always exploring what’s next, she loves turning complex ideas into stories that spark curiosity and conversation.