🛡 Safety Center

A platform you can trust with your time

Character AI is built with safety as a foundation — not an afterthought. Here's everything we do to protect our community.

Core Principles

Safety is a product feature

We believe that a safe platform and an engaging platform aren't in conflict. The best conversations happen when users feel secure, respected, and in control.

01 · Transparency about AI

Every persona on Character AI is clearly identified as an AI. Our systems are never permitted to deny being AI when asked directly. You always know what you're talking to.

02 · User control

You control what personas you interact with, what topics you discuss, and who can see your activity. We provide granular privacy settings and meaningful opt-outs.

03 · No dark patterns

We don't use emotionally manipulative design to keep you on the platform longer than is healthy. Our engagement metrics include wellbeing signals, not just time-on-app.

04 · Data minimization

We collect only what's necessary to operate our services safely. Conversation data is never sold to third parties. See our Privacy Policy for full details.

Content Moderation

Proactive, not reactive

Our moderation system operates in multiple layers — before content reaches users, in real time during conversations, and through retrospective review. A simplified sketch of this layering follows the list below.

  • 🤖 Automated classifiers

    Our AI models scan every message for policy violations — including CSAM, graphic violence, self-harm encouragement, and hate speech — in real time.

  • 👥 Human review teams

    Our Trust & Safety team reviews flagged content, edge cases, and appeals around the clock, 365 days a year.

  • 📋 Creator guidelines

    Every person who creates a persona agrees to our Creator Policy. Violations result in content removal and account action, up to and including a permanent ban.

  • 🚦 Safe messaging protocols

    Personas follow evidence-based safe messaging guidelines for topics like self-harm, eating disorders, and crisis situations — automatically, regardless of how a persona is configured.
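
To make the layering concrete, here is a minimal TypeScript sketch of how a message might pass through automated classification before reaching a human-review queue. Everything in it — the category list, the thresholds, and the function names — is an illustrative assumption, not Character AI's actual moderation system.

```typescript
// Illustrative sketch only: categories, thresholds, and names are hypothetical,
// not Character AI's production moderation pipeline.

type Category = "csam" | "graphic_violence" | "self_harm_encouragement" | "hate_speech";

interface ClassifierResult {
  category: Category;
  score: number; // 0..1 confidence from an automated classifier
}

interface ModerationVerdict {
  allowed: boolean;
  escalateToHumanReview: boolean;
  reasons: Category[];
}

// Hypothetical per-category thresholds: block above the hard limit,
// escalate to a human reviewer in the grey zone between the two.
const HARD_BLOCK = 0.9;
const ESCALATE = 0.5;

function moderateMessage(results: ClassifierResult[]): ModerationVerdict {
  const blocked = results.filter(r => r.score >= HARD_BLOCK);
  const greyZone = results.filter(r => r.score >= ESCALATE && r.score < HARD_BLOCK);

  return {
    allowed: blocked.length === 0,
    escalateToHumanReview: greyZone.length > 0,
    reasons: blocked.map(r => r.category),
  };
}

// Example: a message the violence classifier is unsure about is not blocked
// outright, but it is queued for human review.
const verdict = moderateMessage([
  { category: "graphic_violence", score: 0.62 },
  { category: "hate_speech", score: 0.05 },
]);
console.log(verdict); // { allowed: true, escalateToHumanReview: true, reasons: [] }
```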

99.2% of policy violations detected before reaching users
<4h average time to action on human-reviewed reports
24/7 human moderation coverage, every day of the year
0 tolerance for CSAM — we report all cases to NCMEC
Protecting Minors

Children come first

We treat the protection of users under 18 with the utmost seriousness. Our platform has dedicated safeguards for younger users at every layer; a sketch of how the first two safeguards combine follows the cards below.

🔒 Age verification at signup

We require users to confirm they are 13 or older. Users under 18 are placed in a protected mode with additional content restrictions automatically applied.

🚫 No adult personas for minors

Personas flagged as adult or mature are never shown to users in minor-protected mode. This cannot be circumvented by persona creators.

📞 Parental controls

Parents can request a supervised account link through our Family Safety portal, allowing oversight of persona activity and content categories.

📢 Mandatory reporting

Any detected CSAM is immediately escalated to the National Center for Missing and Exploited Children (NCMEC) and relevant law enforcement. No exceptions.
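
As a rough illustration of how age confirmation and the mature-content restriction could work together, the following TypeScript sketch gates persona visibility on the viewer's age mode. The types and function are hypothetical, not the platform's real data model.

```typescript
// Hypothetical sketch of minor-protection gating; not Character AI's actual code.

interface Persona {
  id: string;
  isMatureFlagged: boolean; // set by creators or by moderation review
}

interface Viewer {
  ageConfirmed: number;     // age confirmed at signup (must be 13 or older)
  minorProtectedMode: boolean;
}

// Under-18 accounts are placed in minor-protected mode automatically,
// and the visibility check relies on that flag, not on persona settings.
function canSeePersona(viewer: Viewer, persona: Persona): boolean {
  if (viewer.ageConfirmed < 13) return false;  // not allowed on the platform at all
  if (viewer.minorProtectedMode && persona.isMatureFlagged) return false;
  return true;
}

// Example: a 15-year-old in protected mode never sees mature-flagged personas,
// regardless of how the persona's creator configured it.
const teen: Viewer = { ageConfirmed: 15, minorProtectedMode: true };
console.log(canSeePersona(teen, { id: "p1", isMatureFlagged: true }));  // false
console.log(canSeePersona(teen, { id: "p2", isMatureFlagged: false })); // true
```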

User Wellbeing

Your mental health matters here

Spending time with AI personas should be enriching, not harmful. These are the tools we've built to support healthy usage; a rough sketch of how such controls might fit together follows the list below.

⏱ Usage reminders

Set daily time limits and receive gentle nudges when you've been chatting for an extended period.

💬 Reality anchors

Our system periodically reminds users in deep roleplay scenarios that they're conversing with an AI character.

🧠 Crisis detection

Conversations that indicate genuine distress automatically surface crisis resources — including local helplines and text support lines.

🌱 Break suggestions

After extended sessions, personas will suggest taking a break, going for a walk, or talking to someone in your life.

🔕 Quiet hours

Set a schedule during which the app won't send notifications or allow sessions, such as during sleep hours or focused work periods.

📊 Conversation insights

A private weekly summary of your chat patterns helps you stay aware of how you're using the platform.
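
One way to picture these controls is as a per-user settings object plus a check that runs before any notification goes out. Everything below — the field names, the settings shape, and the quiet-hours check — is a hypothetical sketch, not the app's actual configuration format.

```typescript
// Hypothetical per-user wellbeing settings; field names are illustrative only.

interface WellbeingSettings {
  dailyLimitMinutes: number | null; // usage reminders: null disables the nudge
  quietHours: { startHour: number; endHour: number } | null; // 24h clock, local time
  weeklyInsightsEmail: boolean;     // private conversation-insights summary
}

// Returns true if a notification may be delivered at the given local hour.
function notificationsAllowed(settings: WellbeingSettings, localHour: number): boolean {
  if (!settings.quietHours) return true;
  const { startHour, endHour } = settings.quietHours;
  // The quiet window may wrap past midnight, e.g. 22:00 to 07:00.
  const inQuietWindow = startHour < endHour
    ? localHour >= startHour && localHour < endHour
    : localHour >= startHour || localHour < endHour;
  return !inQuietWindow;
}

const settings: WellbeingSettings = {
  dailyLimitMinutes: 90,
  quietHours: { startHour: 22, endHour: 7 },
  weeklyInsightsEmail: true,
};

console.log(notificationsAllowed(settings, 23)); // false: inside quiet hours
console.log(notificationsAllowed(settings, 12)); // true
```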

Report & Appeal

Fast, transparent enforcement

Reporting a concern should be easy, and understanding what happened should be even easier. Here's how our process works; a simple sketch of the report lifecycle follows the steps below.

1 · Submit a report

Use the flag icon on any persona, message, or profile. Describe the concern — detailed reports are resolved faster.

2 · Initial review

Automated classifiers triage the report within seconds. Complex cases are escalated to a human reviewer within 4 hours.

3 · Action & notification

If a violation is confirmed, appropriate action is taken — from content removal to account suspension. You'll receive a notification.

4 · Appeals

Any user whose content is removed may appeal within 30 days. Appeals are reviewed by a separate team not involved in the original decision.
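
Read as a simple state machine, the four steps above might look like the TypeScript sketch below. The status names, fields, and the way the 4-hour escalation window is encoded are assumptions made for illustration, not our internal tooling.

```typescript
// Illustrative report-lifecycle sketch; statuses and fields are hypothetical.

type ReportStatus =
  | "submitted"     // step 1: user flags a persona, message, or profile
  | "auto_resolved" // step 2: classifier triage handled it within seconds
  | "human_review"  // step 2: complex case escalated to a reviewer
  | "actioned"      // step 3: violation confirmed, reporter notified
  | "dismissed"     // step 3: no violation found
  | "under_appeal"; // step 4: removed content appealed within 30 days

interface Report {
  id: string;
  status: ReportStatus;
  submittedAt: Date;
  escalationDeadline?: Date; // human-review target: submittedAt + 4 hours
}

function triage(report: Report, classifierConfident: boolean): Report {
  if (classifierConfident) {
    return { ...report, status: "auto_resolved" };
  }
  // Complex cases go to a human reviewer within four hours.
  const deadline = new Date(report.submittedAt.getTime() + 4 * 60 * 60 * 1000);
  return { ...report, status: "human_review", escalationDeadline: deadline };
}

// Example: a report the classifiers can't resolve is escalated with a deadline.
const escalated = triage(
  { id: "r-123", status: "submitted", submittedAt: new Date() },
  /* classifierConfident */ false,
);
console.log(escalated.status, escalated.escalationDeadline); // "human_review", +4h
```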

Need to reach our team directly?

📧 General Safety: safety@character--ai.com
🚨 Emergency / CSAM: urgent@character--ai.com
⚖️ Law Enforcement: legal@character--ai.com
Submit a Report →
Crisis Resources

If you need help right now

You're not alone. These services provide free, confidential support 24/7.

🆘 In immediate danger

Call your local emergency services immediately.

911 (US) · 999 (UK) · 112 (EU)

💙 Mental Health Crisis

988 Suicide & Crisis Lifeline (US)

Call or text 988

💬 Crisis Text Line

Free 24/7 text support

Text HOME to 741741

🌈 LGBTQ+ Support

Trevor Project Lifeline

1-866-488-7386

🍽 Eating Disorders

National Alliance for Eating Disorders

1-866-662-1235

🧒 Child Safety

NCMEC CyberTipline

1-800-843-5678