By Tim Leogrande, BSIT, MSCP, Ed.S.

Mon December 15, 2025 at 09:30 PM ET

Character.AI sells itself as a tool for play: talk to a fictional hero, improvise romance plots, write fanfic in dialogue form, and so on. But the more you read recent news reports, the harder it is to see it as a neutral toy.

What makes Character.AI compelling is also what makes it risky. It’s an always-on, emotionally responsive system that can feel like a private relationship, especially to kids. And that relationship is shaped by engagement incentives, not a duty of care.

This dynamic is at the center of a recent 60 Minutes investigation. CBS reported on families who say Character.AI chats crossed unacceptable lines, including allegations that sexually explicit messages were sent to a 13-year-old. Even more disturbing, CBS examined chat logs from a teen who told a Character.AI bot dozens of times that she was feeling suicidal; her parents say the bot never provided crisis resources.

The segment’s core point is not that the AI “wanted” to harm, but that the product can behave like a predator because it is optimized to keep a vulnerable user engaged in intimate conversation, often late at night, when real-world support is offline and primary caregivers are asleep.

<aside> 💡

The danger here isn’t just “bad advice.” It’s misplaced trust. A bot that mirrors your tone, remembers your story, and tells you it understands you can become a substitute for human connection.

</aside>

The Washington Post described a lawsuit brought by the parents of 13-year-old Juliana Peralta after her death by suicide, alleging that a Character.AI bot fostered emotional dependence and failed to escalate appropriately when she expressed suicidal thoughts. Whether or not courts ultimately agree with the claims, the underlying risk is easy to grasp: a teen in distress can be pulled deeper into the one thing that is always available and always responsive—the chatbot—while withdrawing from the messy, imperfect humans who can actually intervene and offer real support.

This is where “it’s just roleplay” stops being a comforting slogan. Roleplay is powerful precisely because it bypasses our usual skepticism. When a conversation looks like texting, it can feel like texting. CBS noted that in the logs it reviewed, the back-and-forth made it easy for parents to assume their child was messaging friends. That sense of social reality is amplified by “character” framing. Users aren’t chatting with a generic assistant; they’re chatting with a “Hero,” or a romantic interest, or a confident mentor. The story wrapper lowers defenses, encourages disclosure, and turns persuasion into something that feels like affection.

There’s also a more literal safety issue: sexual content and grooming-like dynamics. Advocacy groups say they’ve documented patterns they describe as grooming, sexual exploitation, emotional manipulation, and other harmful behaviors in Character.AI transcripts. You don’t need to assume that the company endorses this for it to be dangerous. Open-ended dialogue is steerable. The product invites users to push boundaries. And because the AI’s “job” is to continue the interaction, it can drift into flirtation, coercion, or explicit content unless guardrails are extremely robust, and robust guardrails are hardest to maintain in long, private, unmoderated chats.

<aside> 💡

Even when content doesn’t become sexual, the “always there for you” vibe can encourage compulsive use. Social media taught us that infinite scroll is addictive. Character.AI offers infinite intimacy. When the feedback loop is “say something emotional → get immediate validation → say something more emotional,” the app becomes less like a game and more like a coping mechanism.

</aside>

That can mean sleep loss, isolation, and a quiet reordering of a teen’s emotional life around a system that can’t truly care for them and isn’t accountable when something goes wrong.

The platform itself has signaled how hard it is to make the experience safe for minors. Character.AI announced that it would remove open-ended chat for users under 18, rolling out changes in late November and shifting teens toward a more limited experience. The Verge reported that the company planned a daily time cap during the transition and described using an “age assurance” approach, with a third-party verification option for adults who get misclassified. In other words, the company appears to be backing away from the exact feature that made it famous—endless character chats—when the user is a minor. That’s not proof of wrongdoing, but it is strong evidence that the risk profile isn’t simply theoretical.

Lawmakers are moving in the same direction. California’s SB 243, signed in October 2025, targets “companion chatbots” and requires disclosures, safety protocols, and reporting obligations, especially around suicidal ideation and self-harm. Reasonable people can argue about whether the law is well-crafted or will be well-enforced, but the thrust is clear: regulators increasingly see emotionally oriented chatbots as a distinct category that needs stronger guardrails than ordinary software.

A separate but related danger is privacy. Character.AI conversations often include extremely sensitive information: mental health struggles, sexuality, family conflict, fantasies, secrets, and shame. Even if a platform promises not to sell your data, the sheer existence of those logs creates risks such as breaches, subpoenas, and internal misuse. And “age assurance” can introduce new privacy tradeoffs, because verifying age can involve more data collection, more third parties, and more systems that can fail.

One subtle hazard is how brand familiarity can launder trust. If a bot looks like a beloved character, users may assume it’s safer, kinder, or “official.” That’s part of why Disney’s reported cease-and-desist letter to Character.AI went beyond copyright. Reuters reported that Disney cited brand-damage concerns alongside infringement, pointing to investigations that described disturbing interactions involving characters resembling Disney intellectual property. This is the nightmare scenario for parents: the bot feels friendly because it wears a familiar face, and the child lets their guard down.

None of this requires believing that Character.AI is uniquely evil. The most salient takeaway from the reporting is scarier: these are structural risks of emotionally sticky chatbots, especially when minors are involved.

<aside> 💡

If you’re building a system designed to simulate closeness, you are also building a system that can simulate manipulation. If you’re building a system that invites vulnerability, you are also collecting vulnerability. If you’re building a system that keeps users talking, you are also building something that can keep users stuck.

</aside>

If you or someone you care about is using companion-style AI, the safest mindset is to treat it like entertainment that can feel real, not a relationship that can keep you safe. And if anyone is experiencing thoughts of self-harm, reach out to local emergency services or a trusted crisis hotline in your area right away, because an AI chat session should never be the last line of support.