
Google Gemini High Risk for Kids, New Safety Report Warns

Aqsa Shahmeer
Last updated: September 8, 2025 1:07 am
Image: The Google Gemini logo behind a child safety warning sign, representing the report that found Google Gemini high risk for kids.

In a damning new safety assessment, Google’s flagship AI, Gemini, has been labeled “High Risk” for children and teenagers by the influential nonprofit Common Sense Media. The report reveals that despite having special tiers for younger users, the AI is still capable of sharing inappropriate and unsafe material, including information on sensitive topics and unsafe mental health advice.

This finding, that Google Gemini is high risk for kids, is a major blow to the company’s efforts to position its AI as a safe and helpful tool for everyone.

The assessment comes at a critical time, as AI chatbots are facing intense scrutiny for their potential to cause harm to vulnerable users. With news that Apple may be integrating Gemini into its next version of Siri, the concerns raised in this report are more urgent than ever.

The core of the problem, according to researchers, is that the “teen experience” of Gemini is merely a modified version of the adult product, not something built with child safety in mind from the ground up. This report on why Google Gemini is high risk for kids is a must-read for parents.

This report gets into the critical findings of the Common Sense Media assessment, examines Google’s response, and explores the broader challenge of making AI safe for its youngest users.


The Core Finding: An Adult AI with Child Filters

The central criticism in the Common Sense Media report is that Google’s approach to child safety is superficial. The organization’s analysis found that both the “Under 13” and “Teen Experience” versions of Gemini are fundamentally the adult model with some safety filters bolted on top.

“For AI to be safe and effective for kids, it must be designed with their needs and development in mind, not just a modified version of a product built for adults,” said Robbie Torney, Common Sense Media’s Senior Director of AI Programs.

The report found that Gemini could still provide children with unsafe information related to sex, drugs, and alcohol. This finding is a key reason why Google Gemini is high risk for kids.

The one area where Gemini performed well was in not forming parasocial relationships; the AI was clear with kids that it was a computer, not a friend, which is a positive safeguard against delusional thinking.

However, this one positive was not enough to overcome the other significant safety gaps that make Google Gemini high risk for kids.

Google’s Defense: An Ongoing Process

In response to the report, which was detailed by TechCrunch, Google defended its safety measures but also acknowledged that some responses weren’t working as intended.

The company stated it has specific policies in place for users under 18 and regularly consults with outside experts to improve its protections.

Google also suggested that the report may have referenced features not available to teens, but could not be sure without seeing the specific prompts used by the researchers.

The company’s admission that it has “added additional safeguards” in response to the findings is a tacit acknowledgment that the issues raised are valid.

This ongoing struggle to make AI safe is a core part of understanding what artificial intelligence is.

The fact that Google Gemini is high risk for kids shows how far the industry still has to go.


The Broader Context: A High-Stakes Problem

The concern about AI’s impact on teen mental health is not theoretical. In recent months, other AI companies, including OpenAI and Character.AI, have faced wrongful death lawsuits after teens died by suicide following interactions with their chatbots.

The finding that Google Gemini is high risk for kids adds another layer of urgency to this industry-wide problem.

This report also has major implications for Apple, as it is reportedly considering a partnership to use Gemini to power the next version of Siri.

If Apple moves forward, it will need to implement its own, more robust safeguards to mitigate the risks identified in this report.

Google’s own innovations, such as the on-device Gemini Nano image generator, show its technical prowess, but this report proves that capability must be matched with responsibility.

As Google continues to push the boundaries with new tools like the game-creating Google Genie 3 AI, ensuring these powerful systems are safe for all users, especially children, is the single greatest challenge.

The fact that Google Gemini is high risk for kids must be addressed.


Frequently Asked Questions (FAQ)

1. Why was Google Gemini rated “high risk” for kids?

Common Sense Media rated Google Gemini as high risk for kids because its research found the AI could still provide inappropriate and unsafe information on topics like sex, drugs, and mental health to young users. The organization also criticized its “one-size-fits-all” approach.

2. How did other AI chatbots score in the assessment?

In previous assessments, Common Sense Media found Meta AI and Character.AI to be “unacceptable” (a more severe rating), Perplexity to be “high risk,” and ChatGPT to be “moderate.”

3. What is Google doing to fix this?

Google stated that it is constantly improving its safety features and has added “additional safeguards” in response to the concerns. It maintains that it has specific policies in place to protect users under 18.

4. Are AI chatbots safe for teenagers to use?

This report shows that parents should be cautious. While AIs can be useful tools, they are not designed with the developmental needs of children in mind and can expose them to harmful content. The finding that Google Gemini is high risk for kids reinforces the need for parental guidance and supervision.

TAGGED: AI, Artificial Intelligence, Gemini, Gemini 2.5 Flash, Google, Google AI
By Aqsa Shahmeer
Contributor
Aqsa Shahmeer dives into the world of technology, innovation, and digital culture with a curious eye. At TygoCover, she breaks down Tech, AI, and social media trends into simple insights that anyone can grasp. Always exploring what’s next, she loves turning complex ideas into stories that spark curiosity and conversation.
Copyright © 2025 Tygo Cover. All Rights Reserved.
