Did you know one of the world’s most widely used AI chatbots could be giving dangerous advice to your kids? That’s right—Google Gemini, the AI that’s everywhere, just got hit with a chilling new report about how unsafe it actually is for children and teens.

A bombshell analysis of Google Gemini by Common Sense Media—a trusted watchdog for digital safety and kids—reveals a disturbing reality: even Gemini’s special ‘Under 13’ and ‘Teen’ modes are basically just the adult version with a few filters tacked on. Forget special kid-friendly builds; these platforms aren’t truly designed with young minds in mind.

Common Sense Media slapped both versions with a ‘High Risk’ label, exposing glaring holes in their so-called safety nets. While Gemini’s filter blocks some bad stuff, it still slips up—letting through content about sex, drugs, and alcohol, and offering questionable mental health advice. Even more troubling, the chatbot sometimes fails to spot signs of serious mental health struggles, and can even role-play as someone else, blurring the lines for vulnerable users. It’s no surprise parents are freaking out, especially after tragic stories surfaced about teens turning to AI for help and spiraling into real danger.

This isn’t just a Gemini problem—it’s part of a wider crisis. Just recently, OpenAI faced a lawsuit from the family of a 16-year-old who had sought advice from ChatGPT about suicide methods. Another platform, Character.AI, is under fire following the death of a 14-year-old in Florida who had been interacting with its chatbot. The warning from experts is crystal clear: no kids under 5 should touch these bots. Ages 6–12? Only with strict parental guidance. And anyone under 18 should stay away from using AI chatbots for mental health or emotional support, period.

Robbie Torney, Director of AI Programs at Common Sense Media, summed it up: “Gemini gets some basics right, but fumbles badly on the details.” Kids need tech that understands their unique needs—not a generic, patched-up adult tool. Shockingly, there’s buzz that Apple might use Gemini as the brain behind its next-gen Siri. If major companies go this route without serious upgrades, even more young people could be at risk.

The report isn’t just pointing fingers at Google. Meta AI and Character.AI were branded “unacceptable” risks, while Perplexity, ChatGPT, and Claude fell somewhere between “high” and “minimal” risk. It’s a wake-up call for tech giants and parents alike: AI chatbots are powerful, but when it comes to kids, the stakes are simply too high.