
Texas Attorney General Investigates Meta Over AI Mental Health Services



Texas Attorney General Ken Paxton has opened an investigation into Meta AI Studio and Character.AI, alleging that the companies may be misleading consumers by presenting their chatbots as sources of emotional support without proper oversight.

Photo: AP

The probe, announced Monday, will examine whether the AI platforms violated Texas consumer protection laws through deceptive trade practices and false advertising.

Misleading Mental Health Claims

In a statement, Paxton warned that the rapid rise of conversational AI has created risks for vulnerable groups, particularly children.

“By posing as sources of emotional support, AI platforms can mislead vulnerable users, especially children, into believing they’re receiving legitimate mental health care,” Paxton said. “In reality, they’re often being fed recycled, generic responses engineered to align with harvested personal data and disguised as therapeutic advice.”

The Attorney General’s office specifically criticized AI personas that appear to act like therapists or psychologists without medical credentials or regulatory oversight.

AI Personas as Therapists

Character.AI, a fast-growing platform with millions of user-generated personas, allows anyone to create and interact with digital “characters.” Among them, a popular persona called “Psychologist” has attracted significant use from younger audiences.

Meta, through its Meta AI Studio, does not offer therapy bots, but its AI chatbot can still be used by children for conversations that may resemble therapeutic guidance.

Both companies stress that they include disclaimers. Meta spokesperson Ryan Daniels said:

“We clearly label AIs, and to help people better understand their limitations, we include a disclaimer that responses are generated by AI — not people. These AIs aren’t licensed professionals and our models are designed to direct users to seek qualified medical or safety professionals when appropriate.”

Character.AI also displays disclaimers, especially when users create personas with terms such as psychologist, therapist, or doctor. However, experts warn that children often ignore or misunderstand these disclaimers, weakening their protective value.

Privacy and Data Concerns

Beyond the therapeutic framing, Paxton highlighted issues around privacy and data collection. While AI chatbots sometimes assure users of confidentiality, their terms of service often reveal extensive data logging.

According to the Texas Attorney General’s office, user interactions are “logged, tracked, and exploited for targeted advertising and algorithmic development, raising serious concerns about privacy violations, data abuse, and false advertising.”

Character.AI’s privacy policy, for example, details how it tracks identifiers, demographics, location data, browsing behavior, and app usage. This data can be linked to social media activity on TikTok, YouTube, Reddit, and Instagram, and then shared with advertisers or analytics providers. A company spokesperson confirmed that the same policies apply to teenagers.

Children at the Center of the Debate

Both Meta and Character.AI claim their services are not intended for children under 13. Yet critics argue the platforms are clearly attractive to young users, especially given the popularity of kid-friendly characters and chatbots.

Photo: The Harris Poll

Character.AI CEO Karandeep Anand has even admitted his six-year-old daughter uses the company’s bots under supervision. Meta, meanwhile, has long faced criticism for failing to adequately enforce age restrictions across its platforms.

The scrutiny comes just days after Senator Josh Hawley announced a congressional investigation into Meta, following reports that its chatbots had engaged in inappropriate conversations with children, including flirting.

Paxton’s office has issued civil investigative demands to both Meta and Character.AI, requiring them to produce documents, data, and testimony. The findings could determine whether the companies face lawsuits or enforcement actions under Texas consumer protection law.

Faraz Khan is a freelance journalist and lecturer with a Master’s in Political Science.