
Rate videos as G, PG-13, R, or Adult based on nudity, violence, and language. Automated content classification for streaming and social platforms.
The Bynn Video Content Rating model analyzes videos to assign age-appropriate ratings based on nudity, violence, and language. This model provides temporal localization identifying exactly when rating-affecting content appears.
Video content rating presents unique challenges that static image analysis cannot address. A two-hour film may contain a single scene that elevates its rating from PG to R. User-generated videos may start innocently before transitioning to inappropriate content. Streaming platforms, social media, and video hosting services must rate millions of hours of content to meet regulatory requirements and user expectations.
Manual review is prohibitively expensive and slow. A single moderator cannot watch every video in real-time. Platforms need automated rating that not only classifies content but identifies precisely when and where rating-affecting material appears—enabling efficient human review of flagged segments rather than entire videos.
For CCTV and surveillance applications, real-time content rating enables automated monitoring of live feeds. Security systems can flag inappropriate behavior as it occurs—nudity, violence, or other policy violations—without requiring constant human observation. Public venues, transportation systems, and facilities can maintain standards automatically, with human operators alerted only when intervention is needed.
When provided with a video, the model evaluates multiple content dimensions including nudity levels, violence, and audio/visual profanity. It provides both an overall rating and a timeline of events that contributed to the rating, enabling precise content moderation and editing decisions.
Achieving 92.0% accuracy, the model uses Bynn's Visual Language Model technology to perform comprehensive video analysis with contextual understanding of content appropriateness standards.
The model applies core rating rules consistently across video content. The baseline, family-safe tier covers wholesome, educational, everyday content: people fully covered or in normal attire with no sexual emphasis, and no wounds, blood, fighting, or crude language. Content meeting this definition is safe and should not be escalated to a higher rating.
The API returns a structured JSON response containing the overall rating, a timeline of rating-affecting events, a short reason, and a confidence level.
| Metric | Value |
|---|---|
| Classification Accuracy | 92.0% |
| Average Response Time | 20,000 ms |
| Max File Size | 100MB |
| Supported Formats | MP4, MOV, AVI, WebM, MKV |
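Clients can pre-check files against these limits before uploading. A minimal sketch, with constants mirroring the table above (the function name and signature are illustrative, not part of the API):

```python
import os

MAX_FILE_SIZE = 100 * 1024 * 1024  # 100 MB, per the limits table
SUPPORTED_EXTENSIONS = {".mp4", ".mov", ".avi", ".webm", ".mkv"}

def is_uploadable(path: str, size_bytes: int) -> bool:
    """Return True if the file's extension and size fall within the documented limits."""
    ext = os.path.splitext(path)[1].lower()
    return ext in SUPPORTED_EXTENSIONS and 0 < size_bytes <= MAX_FILE_SIZE
```

Rejecting oversized or unsupported files client-side avoids a round trip that the API would refuse anyway.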
Important Considerations:
This model provides probability-based classifications, not official content ratings.
Best Practice: Use the events timeline to efficiently review flagged content and make informed rating decisions.
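The review workflow above can be sketched as a filter over the events timeline. This is a hypothetical sketch: it assumes each event is a dict carrying a `timestamp` and the `rating` tier it triggers, which may not match the actual event schema:

```python
# Rating tiers in ascending order of restriction, per the response reference.
TIERS = ["general_audience", "teen_13_plus", "teen_16_plus", "adult_18_plus"]

def segments_to_review(events, min_tier="teen_16_plus"):
    """Collect timestamps of events at or above a given rating tier.

    Assumes each event is a dict with hypothetical 'timestamp' and
    'rating' keys; the real event schema may differ.
    """
    threshold = TIERS.index(min_tier)
    return [e["timestamp"] for e in (events or [])
            if TIERS.index(e["rating"]) >= threshold]
```

A human reviewer then jumps straight to the returned timestamps instead of watching the full video.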
Vision Language Model for image/video understanding with reasoning
| Parameter | Type | Description |
|---|---|---|
| media_type | string | Type of media being sent: `image` or `video`. Auto-detected if not specified. |
| image_url | string | URL of the image to analyze, e.g. `https://example.com/image.jpg` |
| base64_image | string | Base64-encoded image data |
| video_url | string | URL of the video to analyze, e.g. `https://example.com/video.mp4` |
| base64_video | string | Base64-encoded video data |
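A request body can be assembled from these parameters. A sketch under the assumption that only the fields you set should be sent (the helper name is illustrative; transport and authentication are omitted):

```python
def build_request(model: str, *, media_type=None, image_url=None,
                  base64_image=None, video_url=None, base64_video=None):
    """Assemble a request body dict, including only the fields that were set."""
    body = {"model": model}
    for key, value in [("media_type", media_type),
                       ("image_url", image_url),
                       ("base64_image", base64_image),
                       ("video_url", video_url),
                       ("base64_video", base64_video)]:
        if value is not None:
            body[key] = value
    return body
```

Omitting `media_type` lets the service auto-detect it, per the parameter table.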
Structured Content Rating response:

| Field | Type | Description |
|---|---|---|
| response | object | Structured response from the model |
| response.events | array | Timeline of events that contributed to the rating |
| response.rating | string | One of `general_audience`, `teen_13_plus`, `teen_16_plus`, `adult_18_plus` |
| response.reason | string | Explanation of the assigned rating |
| response.confidence | string | One of `low`, `medium`, `high` |
| thinking | string | Chain-of-thought reasoning from the model (may be empty) |
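The fields above can be pulled out of a raw response with a small helper. A sketch using only the field names documented in the table (the function name is illustrative):

```python
import json

def parse_rating(raw: str):
    """Extract the rating, confidence, and events list from a raw API response."""
    payload = json.loads(raw)
    resp = payload["result"]["response"]
    # events may be null when no rating-affecting content was found
    return resp["rating"], resp["confidence"], resp.get("events") or []
```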
Example request:

```json
{
  "model": "video-content-rating",
  "image_url": "https://example.com/image.jpg"
}
```

Example response:

```json
{
  "inference_id": "inf_abc123def456",
  "model_id": "video_content_rating",
  "model_name": "Content Rating",
  "moderation_type": "video",
  "status": "completed",
  "result": {
    "response": {
      "events": null,
      "rating": "general_audience",
      "reason": "example_reason",
      "confidence": "low"
    },
    "thinking": ""
  }
}
```

If you exceed the rate limit, the API returns a 429 HTTP error code along with an error message. You should then retry with an exponential back-off strategy, meaning that you should retry after 4 seconds, then 8 seconds, then 16 seconds, and so on.

Integrate Content Rating into your application today with our easy-to-use API.
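The retry schedule described above (4 s, 8 s, 16 s, ...) can be sketched as a generic wrapper. This is a sketch, not official client code; `send` stands in for whatever function performs the HTTP call and returns a `(status_code, body)` pair:

```python
import time

def call_with_backoff(send, max_retries=5, base_delay=4.0):
    """Call send() and retry on HTTP 429, doubling the delay each attempt.

    `send` is any zero-argument callable returning (status_code, body).
    """
    delay = base_delay
    for attempt in range(max_retries + 1):
        status, body = send()
        if status != 429:
            return status, body
        if attempt < max_retries:
            time.sleep(delay)
            delay *= 2  # 4 s, then 8 s, then 16 s, ...
    return status, body
```

Doubling the delay keeps well-behaved clients from hammering the API while it is rate-limiting them.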