Google is under fire after Common Sense Media rated its Gemini AI platform as “High Risk” for children and teenagers, warning that the technology could expose young users to inappropriate content and unsafe mental health advice.

The assessment raises serious questions about Google’s family-friendly reputation and its readiness to safeguard minors in the rapidly growing AI market.
Common Sense Sounds the Alarm
Common Sense Media — a nonprofit trusted by millions of parents — examined Google’s youth-focused Gemini products, including the “Under 13” and “Teen Experience” tiers. According to the group’s findings, these versions are essentially the same as the adult Gemini, with only superficial safety filters layered on top.
“Gemini gets some basics right, but it stumbles on the details,” said Robbie Torney, Senior Director of AI Programs at Common Sense.
“An AI platform for kids should meet them where they are, not take a one-size-fits-all approach… For AI to be safe and effective for kids, it must be designed with their needs and development in mind, not just a modified version of a product built for adults.”
The review concluded that Gemini could still provide children with “inappropriate and unsafe” information on topics such as sex, drugs, alcohol, and mental health — issues that parents worry most about when it comes to unsupervised AI use.
The Mental Health Crisis Context
The warning comes as AI-linked teen suicides are making headlines. In one case, OpenAI faces a wrongful death lawsuit after a 16-year-old allegedly used ChatGPT for months to plan his suicide. Similarly, Character.AI is facing legal challenges after a teen user took their own life while using the platform.

Against this backdrop, experts fear that Gemini’s shortcomings could deepen risks for vulnerable children and teenagers who turn to AI for emotional support.
Google Pushes Back
Google defended its approach, saying it enforces specific policies and safeguards for under-18 users, runs red-team safety testing, and consults outside experts.
The company acknowledged that some Gemini responses “weren’t working as intended,” prompting the addition of new safeguards. It also argued that Common Sense’s report may have referenced features not available to minors, though it admitted it could not verify the methodology because it did not have access to the group’s testing queries.
Industry-Wide Safety Failures
The Gemini rating also fits into a broader pattern. In past reviews, Common Sense Media found:
- Meta AI and Character.AI → “Unacceptable” (most severe rating)
- Perplexity AI → “High Risk”
- OpenAI’s ChatGPT → “Moderate Risk”
- Anthropic’s Claude (adults only) → “Minimal Risk”
The nonprofit argues that most AI firms are simply retrofitting adult models with filters rather than designing youth-focused systems from scratch.
What’s at Stake
With lawmakers in the US and Europe already scrutinizing AI’s impact on children’s mental health, the Gemini controversy may accelerate calls for stricter regulation.
For parents, the report reinforces concerns that tech giants are prioritizing speed and market share over safety. For the AI industry, it poses an existential question: Can AI ever be safe for kids without being built for them from the ground up?