
Detect CSAM indicators and sextortion threats in text with tiered severity classification. Protect children by identifying grooming, exploitation, sextortion, and illegal content.
The CSAM Text Detection model analyzes text content to identify indicators of child sexual abuse material (CSAM) and sextortion threats. This model uses a tiered severity classification system to help platforms comply with legal reporting requirements and protect children from exploitation, grooming, and sextortion schemes.
Online platforms face legal and ethical obligations to detect, report, and remove CSAM content and sextortion threats. Text-based CSAM indicators and sextortion language can appear in captions, messages, comments, and file names. Sextortion—where perpetrators coerce victims by threatening to share intimate images—has become an epidemic targeting minors. Manual review at scale is impossible, and keyword-based approaches miss sophisticated evasion techniques. Platforms need AI-powered detection that understands context and severity while minimizing false positives that burden review teams.
The CSAM Text Detection model performs multi-tier classification to identify text containing CSAM indicators and sextortion threats. The model assigns content to severity tiers (1-5) based on the nature and explicitness of the material described, including coercive language patterns typical of sextortion schemes, enabling appropriate escalation and reporting workflows.
Achieving 99.9% accuracy, this model helps platforms meet legal obligations under NCMEC reporting requirements and international child protection laws.
The model employs advanced natural language understanding to analyze text for CSAM indicators and sextortion threats.
The API returns a structured response containing the assigned severity tier and per-tier probabilities:
| Tier | Description | Recommended Action |
|---|---|---|
| Safe (0) | No CSAM indicators detected | No action required |
| Tier 1 | Lowest severity indicators | Flag for review |
| Tier 2 | Low severity indicators | Priority review |
| Tier 3 | Moderate severity indicators | Immediate review, consider reporting |
| Tier 4 | High severity indicators | Immediate removal, mandatory reporting |
| Tier 5 | Highest severity indicators | Immediate removal, urgent NCMEC report |
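The tier-to-action mapping above can be sketched as a simple dispatch table. The tier values mirror the API's `tier` response field; the action identifiers and the `recommended_action` helper are illustrative, not part of the API.

```python
# Map each severity tier (0-5) to the recommended escalation action.
# Action names are illustrative; the tiers come from the table above.
TIER_ACTIONS = {
    0: "no_action",
    1: "flag_for_review",
    2: "priority_review",
    3: "immediate_review_consider_reporting",
    4: "remove_and_report",               # mandatory reporting
    5: "remove_and_urgent_ncmec_report",  # urgent NCMEC report
}

def recommended_action(tier: int) -> str:
    """Return the escalation action for a severity tier, treating any
    unexpected tier value conservatively as the highest severity."""
    return TIER_ACTIONS.get(tier, TIER_ACTIONS[5])
```

Failing closed on unknown tiers is a deliberate choice here: in a child-safety workflow, an unrecognized value should escalate rather than silently pass.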
| Metric | Value |
|---|---|
| Classification Accuracy | 99.9% |
| Average Response Time | 150ms |
| Max File Size | 1MB |
| Supported Formats | TXT, JSON |
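Given the 1MB request limit above, a client can validate payload size before sending rather than waiting for a rejected request. This helper is a sketch (not part of any official SDK) and assumes "1MB" means 10^6 bytes of UTF-8 encoded text.

```python
MAX_BYTES = 1_000_000  # assumed interpretation of the 1MB limit above

def fits_size_limit(text: str) -> bool:
    """Check the UTF-8 encoded size of the text against the size cap.
    Encoded length matters: multi-byte characters count more than one byte."""
    return len(text.encode("utf-8")) <= MAX_BYTES
```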
CRITICAL: Platforms have legal obligations regarding CSAM and sextortion detection and reporting.
This model is restricted to Business plan subscribers and above due to the sensitive nature of CSAM and sextortion detection. Organizations must agree to acceptable use policies and demonstrate legitimate trust & safety use cases.
Classifies text for CSAM indicators using tiered severity levels.

Request parameters:

| Parameter | Type | Required | Description |
|---|---|---|---|
| text | string | Required | Text content to analyze for CSAM indicators (e.g. "Sample text to analyze") |

Response fields (CSAM text classification with tiered severity probabilities):

| Field | Type | Description | Example |
|---|---|---|---|
| label | string | Primary classification label | "safe" |
| tier | integer | Severity tier (0 for safe, 1-5 for increasing severity) | 0 |
| is_csam | boolean | True if content is classified as CSAM | false |
| tier1_probability | float | Probability of tier 1 classification (0.0-1.0) | 0.02 |
| tier2_probability | float | Probability of tier 2 classification (0.0-1.0) | 0.01 |
| tier3_probability | float | Probability of tier 3 classification (0.0-1.0) | 0.01 |
| tier4_probability | float | Probability of tier 4 classification (0.0-1.0) | 0.01 |
| tier5_probability | float | Probability of tier 5 classification (0.0-1.0) | 0.01 |
| safe_probability | float | Probability of safe classification (0.0-1.0) | 0.94 |
| confidence | float | Classification confidence (0.0-1.0) | 0.94 |

Example request:

```json
{
  "model": "bynn-csam-text",
  "content": "Sample text to analyze for safety"
}
```

Example response:

```json
{
  "success": true,
  "data": {
    "label": "safe",
    "tier": 0,
    "is_csam": false,
    "tier1_probability": 0.02,
    "tier2_probability": 0.01,
    "tier3_probability": 0.01,
    "tier4_probability": 0.01,
    "tier5_probability": 0.01,
    "safe_probability": 0.94,
    "confidence": 0.94
  }
}
```

If you exceed the rate limit, the API returns a 429 HTTP error code along with an error message. You should then retry with an exponential back-off strategy, meaning that you should retry after 4 seconds, then 8 seconds, then 16 seconds, and so on.

Integrate CSAM Text Detection into your application today with our easy-to-use API.
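The exponential back-off strategy described above can be sketched as follows. Only the request body shape, the 429 status code, and the 4s/8s/16s doubling schedule come from this page; the endpoint URL and the bearer-token auth header are placeholders (consult the API reference for the real values).

```python
import json
import time
import urllib.error
import urllib.request

API_URL = "https://api.example.com/v1/csam-text"  # placeholder endpoint

def backoff_delays(retries: int, base: float = 4.0):
    """Yield the documented back-off schedule: 4s, 8s, 16s, ..."""
    for attempt in range(retries):
        yield base * (2 ** attempt)

def classify_text(text: str, api_key: str, retries: int = 3) -> dict:
    """POST text for classification, retrying on HTTP 429 with
    exponential back-off. URL and auth scheme are assumptions."""
    payload = json.dumps({"model": "bynn-csam-text", "content": text}).encode()
    for delay in list(backoff_delays(retries)) + [None]:
        req = urllib.request.Request(
            API_URL,
            data=payload,
            headers={
                "Content-Type": "application/json",
                "Authorization": f"Bearer {api_key}",  # assumed auth scheme
            },
        )
        try:
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)
        except urllib.error.HTTPError as err:
            if err.code != 429 or delay is None:
                raise  # non-rate-limit error, or retries exhausted
            time.sleep(delay)  # rate limited: wait, then double the delay
```

Capping the number of retries keeps a persistently rate-limited worker from blocking forever; the final `None` sentinel makes the last 429 propagate to the caller.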