
Detects unsafe audio content, including hate speech, harassment, discrimination, profanity, illegal content, and other toxicity.
Request parameters:

| Parameter | Type | Required | Description | Example |
| --- | --- | --- | --- | --- |
| `audio_url` | string | Yes | URL of the audio file to analyze for unsafe content | `https://example.com/audio.mp3` |

Response fields (audio safety analysis with multi-category classification):

| Field | Type | Description | Example |
| --- | --- | --- | --- |
| `is_unsafe` | boolean | `true` if unsafe content was detected | `false` |
| `discrimination_probability` | float | Probability of discrimination/hate speech (0.0-1.0) | `0.02` |
| `harassment_probability` | float | Probability of harassment content (0.0-1.0) | `0.03` |
| `sexual_probability` | float | Probability of sexual content (0.0-1.0) | `0.01` |
| `illegal_probability` | float | Probability of illegal activity content (0.0-1.0) | `0.01` |
| `dating_probability` | float | Probability of inappropriate dating content (0.0-1.0) | `0.02` |
| `profanity_probability` | float | Probability of profanity (0.0-1.0) | `0.05` |
| `max_probability` | float | Highest probability across all categories (0.0-1.0) | `0.05` |
| `top_category` | string | Category with the highest probability | `"safe"` |

Example request:

```json
{
  "model": "voice-safety-detection",
  "audio_url": "https://example.com/audio.mp3"
}
```
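For illustration, here is a minimal Python sketch that sends this request. The endpoint URL, the `Authorization` header scheme, and the environment variable name are assumptions for the example, not part of this reference; substitute the values from your own account.

```python
import os

import requests

# Hypothetical endpoint and auth scheme, for illustration only.
# Substitute the base URL and credentials from your own account.
API_URL = "https://api.example.com/v1/predictions"
API_KEY = os.environ["VOICE_SAFETY_API_KEY"]

payload = {
    "model": "voice-safety-detection",
    "audio_url": "https://example.com/audio.mp3",
}

response = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```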
"success": true,
"data": {
"is_unsafe": false,
"discrimination_probability": 0.02,
"harassment_probability": 0.03,
"sexual_probability": 0.01,
"illegal_probability": 0.01,
"dating_probability": 0.02,
"profanity_probability": 0.05,
"max_probability": 0.05,
"top_category": "safe"
}
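Downstream handling typically keys off `is_unsafe` for a hard decision and uses `max_probability` and `top_category` for logging or stricter custom thresholds. A small sketch of one such policy (the 0.8 threshold and the block/review/allow actions are arbitrary examples, not recommended values):

```python
def review_result(data: dict, threshold: float = 0.8) -> str:
    """Map the response's `data` object to a simple moderation action."""
    if data["is_unsafe"]:
        return f"block ({data['top_category']})"
    if data["max_probability"] >= threshold:
        # Not flagged outright, but one category is close to the line;
        # route to human review instead of blocking automatically.
        return f"review ({data['top_category']}: {data['max_probability']:.2f})"
    return "allow"
```

With the example response above, this returns `"allow"`, since `max_probability` is only 0.05.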
If you exceed the rate limit, you will receive a 429 HTTP error code along with an error message. You should then retry with an exponential back-off strategy, meaning that you should retry after 4 seconds, then 8 seconds, then 16 seconds, and so on.

Integrate Voice Safety Detection into your application today with our easy-to-use API.
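For reference, a minimal sketch of that back-off loop in Python. The endpoint URL and `Authorization` header are assumptions carried over from the sketches above, not part of this API reference:

```python
import time

import requests


def classify_audio(audio_url: str, api_url: str, api_key: str,
                   max_retries: int = 5) -> dict:
    """POST a voice-safety request, retrying on HTTP 429 with exponential back-off."""
    payload = {"model": "voice-safety-detection", "audio_url": audio_url}
    headers = {"Authorization": f"Bearer {api_key}"}
    delay = 4  # seconds; doubles on each retry: 4, 8, 16, ...
    for _ in range(max_retries):
        response = requests.post(api_url, json=payload,
                                 headers=headers, timeout=30)
        if response.status_code != 429:
            response.raise_for_status()  # surface non-rate-limit errors
            return response.json()
        time.sleep(delay)
        delay *= 2
    raise RuntimeError("Still rate-limited after all retries")
```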