Character AI is built with safety as a foundation — not an afterthought. Here's everything we do to protect our community.
We believe that a safe platform and an engaging platform aren't in conflict. The best conversations happen when users feel secure, respected, and in control.
Every persona on Character AI is clearly identified as an AI. We never allow our systems to deny being AI when sincerely asked. You always know what you're talking to.
You control what personas you interact with, what topics you discuss, and who can see your activity. We provide granular privacy settings and meaningful opt-outs.
We don't use emotionally manipulative design to keep you on the platform longer than is healthy. Our engagement metrics include wellbeing signals, not just time-on-app.
We collect only what's necessary to operate our services safely. Conversation data is never sold to third parties. See our Privacy Policy for full details.
Our moderation system operates in multiple layers — before content reaches users, in real time during conversations, and through retrospective review.
Our AI models scan every message for policy violations — including CSAM, graphic violence, self-harm encouragement, and hate speech — in real time.
Our Trust & Safety team reviews flagged content, edge cases, and appeals around the clock, 365 days a year.
Every person who creates a persona agrees to our Creator Policy. Violations result in content removal and account action, up to and including a permanent ban.
Personas follow evidence-based safe messaging guidelines for topics like self-harm, eating disorders, and crisis situations — automatically, regardless of how a persona is configured.
We take the protection of users under 18 extremely seriously. Our platform has dedicated safeguards for younger users at every layer.
We require users to confirm they are 13 or older. Users under 18 are placed in a protected mode with additional content restrictions automatically applied.
Personas flagged as adult or mature are never shown to users in minor-protected mode. This cannot be circumvented by persona creators.
Parents can request a supervised account link through our Family Safety portal, allowing oversight of persona activity and content categories.
Any detected CSAM is immediately escalated to the National Center for Missing and Exploited Children (NCMEC) and relevant law enforcement. No exceptions.
Spending time with AI personas should be enriching, not harmful. These are the tools we've built to support healthy usage.
Set daily time limits and receive gentle nudges when you've been chatting for an extended period.
Our system periodically reminds users in deep roleplay scenarios that they're conversing with an AI character.
Conversations that indicate genuine distress automatically surface crisis resources — including local helplines and text support lines.
After extended sessions, personas will suggest taking a break, going for a walk, or reaching out to someone in your life.
Set a schedule so the app won't send notifications or allow sessions during sleep hours or focused work periods.
A private weekly summary of your chat patterns helps you stay aware of how you're using the platform.
Reporting a concern should be easy, and understanding what happened should be even easier. Here's how our process works.
Use the flag icon on any persona, message, or profile. Describe the concern — detailed reports are resolved faster.
Automated classifiers triage the report within seconds. Complex cases are escalated to a human reviewer within 4 hours.
If a violation is confirmed, appropriate action is taken — from content removal to account suspension. You'll be notified of the outcome.
Any user whose content is removed may appeal within 30 days. Appeals are reviewed by a separate team not involved in the original decision.
You're not alone. These services provide free, confidential support 24/7.
Local emergency services: call your local emergency services immediately. 911 (US) · 999 (UK) · 112 (EU)
988 Suicide & Crisis Lifeline (US): call or text 988
Crisis Text Line: free 24/7 text support. Text HOME to 741741
Trevor Project Lifeline: 1-866-488-7386
National Alliance for Eating Disorders: 1-866-662-1235
NCMEC CyberTipline: 1-800-843-5678