
Detect face occlusions from masks, sunglasses, hands, and objects. Ensure clear facial visibility for identity verification and biometric systems.
The Bynn Face Occlusion Detection model determines whether a person's face is occluded or obscured in an image. This model is essential for identity verification workflows where clear face visibility is required for accurate identification.
Identity verification fails silently when faces are obscured. Users submit photos wearing sunglasses, masks, or hats. Images are blurry, poorly lit, or awkwardly cropped. Facial recognition systems return low confidence scores or false matches, but cannot explain why—leading to frustrated users and support escalations.
Early detection of face occlusion prevents wasted processing and poor user experience. Rather than running expensive facial recognition on unsuitable images, platforms can immediately prompt users to resubmit with clear face visibility. This quality gate reduces verification failures, speeds processing, and improves completion rates for onboarding flows.
Beyond identity verification, face occlusion detection enables critical security applications. CCTV systems can flag individuals deliberately concealing their identity—ski masks, balaclavas, or face coverings inappropriate for the environment. Early detection of masked individuals in banks, schools, or retail environments can trigger alerts before incidents escalate, potentially preventing robberies or violent attacks.
When provided with an image containing a person, the detector analyzes whether the face is clearly visible or obstructed by various factors including masks, sunglasses, hand positions, image quality issues, or framing problems.
Achieving 93.0% accuracy, the model uses Bynn's Visual Language Model technology to understand both physical obstructions and image quality factors that prevent clear face visibility.
The model evaluates multiple factors that can obstruct face visibility:

- Physical coverings such as masks, sunglasses, or hats
- Hands or objects held in front of the face
- Image quality issues such as blur or poor lighting
- Framing problems such as awkward cropping

The API returns a structured JSON response containing:

- `occluded`: a boolean indicating whether the face is occluded
- `thinking`: chain-of-thought reasoning from the model (may be empty)
- Request metadata such as `inference_id`, `model_id`, and `status`

Face is considered occluded when:

- The face is covered by a mask, sunglasses, a hand, or another object
- Image quality or framing prevents the face from being clearly seen

Face is considered not occluded when:

- The face is clearly visible and unobstructed
- Image quality and framing allow the face to be fully assessed
| Metric | Value |
|---|---|
| Detection Accuracy | 93.0% |
| Average Response Time | 15,000 ms |
| Max File Size | 20MB |
| Supported Formats | GIF, JPEG, JPG, PNG, WebP |
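Given the limits in the table above, submissions can be validated client-side before upload. A minimal sketch, assuming local file paths (the function name is illustrative; size is only checked when the file exists on disk):

```python
import os

MAX_BYTES = 20 * 1024 * 1024  # 20MB limit from the table above
SUPPORTED = {".gif", ".jpeg", ".jpg", ".png", ".webp"}

def validate_upload(path: str) -> list[str]:
    """Return a list of problems; an empty list means the file passes."""
    problems = []
    if os.path.splitext(path)[1].lower() not in SUPPORTED:
        problems.append("unsupported format")
    # Size check only applies when the file exists locally.
    if os.path.exists(path) and os.path.getsize(path) > MAX_BYTES:
        problems.append("file exceeds 20MB")
    return problems
```

Rejecting oversized or unsupported files locally avoids a round trip that would fail anyway.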
Important Considerations:
This model provides face visibility assessment, not identity verification.
Best Practice: Integrate occlusion detection early in verification workflows to provide immediate feedback and improve submission quality.
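A minimal sketch of such an early quality gate, assuming the response shape shown in the example response on this page (the function name and return values are illustrative):

```python
def handle_occlusion_result(api_response: dict) -> str:
    """Map the model's structured response to a next step for the user."""
    result = api_response.get("result", {})
    occluded = result.get("response", {}).get("occluded")

    if occluded is True:
        # Face obscured: prompt the user to resubmit before running
        # expensive facial recognition.
        return "resubmit"
    if occluded is False:
        # Face clearly visible: safe to continue to identity verification.
        return "proceed"
    # Missing or unexpected field: treat as inconclusive.
    return "manual_review"

example = {
    "status": "completed",
    "result": {"response": {"occluded": False}, "thinking": ""},
}
print(handle_occlusion_result(example))  # proceed
```

Running this gate first gives users immediate, actionable feedback instead of an unexplained low-confidence match later in the pipeline.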
Vision Language Model for image/video understanding with reasoning
| Parameter | Type | Description |
|---|---|---|
| media_type | string | Type of media being sent: 'image' or 'video'. Auto-detected if not specified. |
| image_url | string | URL of image to analyze (e.g. https://example.com/image.jpg) |
| base64_image | string | Base64-encoded image data |
| video_url | string | URL of video to analyze (e.g. https://example.com/video.mp4) |
| base64_video | string | Base64-encoded video data |
Structured Face Occlusion Detection response

| Field | Type | Description |
|---|---|---|
| response | object | Structured response from the model |
| response.occluded | boolean | Whether the face in the image is occluded |
| thinking | string | Chain-of-thought reasoning from the model (may be empty) |
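Because results may arrive in a non-completed state, it is worth checking `status` before reading the nested fields. A small sketch, assuming the response shape documented above (the function name is illustrative):

```python
def parse_response(resp: dict) -> tuple:
    """Extract (occluded, thinking) from an API response.

    Returns occluded=None when the inference has not completed,
    so callers can distinguish "not occluded" from "no answer yet".
    """
    if resp.get("status") != "completed":
        return None, ""
    result = resp.get("result", {})
    occluded = result.get("response", {}).get("occluded")
    thinking = result.get("thinking", "")
    return occluded, thinking
```

Surfacing `thinking` alongside the boolean can help when debugging borderline cases, since it records the model's reasoning.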
Example request:

```json
{
  "model": "face-occlusion-detection",
  "image_url": "https://example.com/image.jpg"
}
```

Example response:

```json
{
  "inference_id": "inf_abc123def456",
  "model_id": "face_occlusion_detection",
  "model_name": "Face Occlusion Detection",
  "moderation_type": "image",
  "status": "completed",
  "result": {
    "response": {
      "occluded": false
    },
    "thinking": ""
  }
}
```

If you exceed the rate limit, the API returns a 429 HTTP error code along with an error message. You should then retry with an exponential back-off strategy, meaning that you should retry after 4 seconds, then 8 seconds, then 16 seconds, and so on.

Integrate Face Occlusion Detection into your application today with our easy-to-use API.
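The back-off schedule described above (4s, 8s, 16s, ...) can be sketched with the standard library alone. The endpoint URL, headers, and function names here are placeholders, not part of the documented API:

```python
import json
import time
import urllib.error
import urllib.request

def backoff_delay(attempt: int) -> int:
    """Exponential back-off: 4s, 8s, 16s, ... as described above."""
    return 4 * 2 ** attempt

def post_with_backoff(url: str, body: dict, max_retries: int = 5) -> dict:
    """POST JSON, retrying on HTTP 429 with exponential back-off."""
    data = json.dumps(body).encode("utf-8")
    for attempt in range(max_retries + 1):
        req = urllib.request.Request(
            url, data=data, headers={"Content-Type": "application/json"}
        )
        try:
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)
        except urllib.error.HTTPError as err:
            # Only 429 (rate limited) is retried; other errors propagate.
            if err.code != 429 or attempt == max_retries:
                raise
            time.sleep(backoff_delay(attempt))
    raise RuntimeError("unreachable")
```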