
iBeta Level 2 certified face liveness detection. Prevent spoofing attacks from photos, videos, and masks in biometric authentication systems.
The Bynn Face Liveness Detection model provides passive anti-spoofing for biometric authentication systems. It determines whether the face in a video belongs to a live person physically present or is a presentation attack—such as a printed photo, 3D mask, or screen replay. The model is iBeta Level 2 compliant, meeting the most rigorous standards required for secure identity verification.
Facial recognition has become the gold standard for identity verification. Banks use it for account access. Airports use it for border control. Enterprises use it for secure facilities. But every facial recognition system is vulnerable to a fundamental weakness: it cannot inherently distinguish a live person from a representation of that person.
Presentation attacks have become devastatingly sophisticated. Attackers print high-resolution photos on premium paper. They display videos on 4K screens. They commission realistic 3D-printed masks. They use deepfake-generated videos. Each attack vector exploits the same vulnerability—the camera captures pixels, not presence. Without liveness detection, facial recognition is security theater.
The stakes are enormous. Account takeover fraud costs billions annually. A printed photo bypassing facial authentication can drain bank accounts, access medical records, or breach corporate systems. Identity theft victims spend years recovering from compromised biometric data—unlike passwords, you cannot change your face.
Traditional anti-spoofing approaches fail or frustrate users. "Blink detection" is trivially defeated by video playback. "Turn your head" instructions slow onboarding and confuse users, leading to abandonment. Challenge-response methods add friction that drives customers to competitors. Active liveness demands user cooperation that many refuse to provide.
Screen replay attacks are particularly insidious. An attacker records a victim's face—from social media, a video call, or surreptitious filming—then plays it back on a phone screen held to the camera. The displayed video includes natural motion, blinking, and expression changes. To a naive system, it looks alive. Detection requires analyzing subtle signals invisible to the human eye.
Print attacks remain common despite their simplicity. A color laser print of someone's face, especially with eye holes cut out to simulate blinking, defeats many systems. Paper texture, color gamut limitations, and flatness provide detection signals—but only if the system knows what to look for.
3D mask attacks target high-security systems. Silicone masks with realistic skin texture, custom-fitted to the victim's face shape, can fool systems that rely solely on 2D analysis. Detecting these requires understanding facial geometry—real faces have depth and curvature that flat representations lack.
The Bynn Face Liveness Detection model uses passive liveness—it requires no user action. No blinking on command, no head turning, no reading numbers aloud. Users simply look at the camera naturally while the system analyzes the video stream for authenticity signals.
The model analyzes multiple dimensions of the video to distinguish live faces from attacks, and it requires sufficient video data for reliable analysis (at least 5 seconds of footage, or 8 frames). The API returns a comprehensive liveness analysis for every request. The model achieves 98.2% accuracy with iBeta Level 2 compliance:
| Metric | Value |
|---|---|
| Detection Accuracy | 98.2% |
| iBeta Compliance | Level 2 |
| False Accept Rate (FAR) | 0.36% |
| False Reject Rate (FRR) | 4.97% |
| ACER | 1.76% |
| AUC | 0.9988 |
| Average Response Time | 2100ms |
| Max File Size | 50MB |
| Minimum Video Duration | 5 seconds (8 frames) |
| Supported Formats | MP4, MOV, AVI, WebM, MKV |
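The file size, duration, and format limits above can be enforced client-side before uploading, avoiding a round trip for videos that would be rejected. A minimal preflight sketch (the limits come from the table; the helper name is illustrative, and the caller supplies the duration, e.g. read from container metadata):

```python
import os

# Limits from the model's documented constraints.
MAX_FILE_SIZE_BYTES = 50 * 1024 * 1024   # 50MB max file size
MIN_DURATION_SECONDS = 5.0               # minimum 5 seconds (8 frames)
SUPPORTED_EXTENSIONS = {".mp4", ".mov", ".avi", ".webm", ".mkv"}

def validate_video(path: str, duration_seconds: float) -> list[str]:
    """Return a list of constraint violations; an empty list means OK."""
    errors = []
    ext = os.path.splitext(path)[1].lower()
    if ext not in SUPPORTED_EXTENSIONS:
        errors.append(f"unsupported format: {ext or 'none'}")
    if os.path.exists(path) and os.path.getsize(path) > MAX_FILE_SIZE_BYTES:
        errors.append("file exceeds 50MB limit")
    if duration_seconds < MIN_DURATION_SECONDS:
        errors.append("video shorter than 5 seconds")
    return errors
```

Rejecting out-of-spec videos locally gives users immediate feedback instead of a failed upload.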
Important Considerations:
This model provides probability-based liveness detection as part of a defense-in-depth security strategy.
Best Practice: Deploy liveness detection alongside deepfake detection for comprehensive anti-spoofing. Liveness catches physical presentation attacks (photos, masks, screens), while deepfake detection catches AI-generated synthetic faces. Combine both with document authentication, facial matching, device fingerprinting, and behavioral analysis. No single control is foolproof—security comes from defense in depth.
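The layered approach above reduces to a simple policy: a verification session passes only when every independent control passes. A sketch of that policy, with hypothetical check names and purely illustrative thresholds (not recommended values):

```python
def passes_verification(liveness_real_prob: float,
                        deepfake_synthetic_prob: float,
                        document_authentic: bool,
                        face_match_score: float) -> bool:
    """Defense-in-depth: every control must pass independently.
    All thresholds are illustrative placeholders."""
    return (liveness_real_prob >= 0.90          # liveness: physical presentation attacks
            and deepfake_synthetic_prob < 0.10  # deepfake: AI-generated synthetic faces
            and document_authentic              # document authentication passed
            and face_match_score >= 0.85)       # selfie matches the ID portrait
```

Because the checks are conjunctive, an attacker must defeat every control at once rather than the weakest one.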
Face anti-spoofing for biometric authentication (requires 8 frames for temporal analysis)
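When submitting an array of frame images instead of a full video, the 8 frames should span the capture evenly so the temporal analysis sees motion rather than near-duplicates. A sketch of picking evenly spaced frame indices (the helper name is illustrative):

```python
def sample_frame_indices(total_frames: int, needed: int = 8) -> list[int]:
    """Pick `needed` evenly spaced frame indices from a clip of `total_frames`.

    Raises ValueError if the clip cannot supply enough distinct frames.
    """
    if total_frames < needed:
        raise ValueError(f"clip has {total_frames} frames; {needed} required")
    # Spread indices across [0, total_frames - 1] inclusive.
    step = (total_frames - 1) / (needed - 1)
    return [round(i * step) for i in range(needed)]
```

For example, a 150-frame clip yields indices from the first frame (0) through the last (149), spaced roughly 21 frames apart.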
Request parameters:

| Parameter | Type | Description | Example |
|---|---|---|---|
| video_url | string | URL of video for liveness detection (preferred) | https://example.com/selfie.mp4 |
| image_url | string or array | Single video URL or array of 8 frame images | https://example.com/video.mp4 |

The response is a liveness detection result with attack type identification:

| Field | Type | Description | Example |
|---|---|---|---|
| is_real | boolean | True if a real face is detected (liveness pass) | true |
| is_spoof | boolean | True if a presentation attack is detected | false |
| real_probability | float | Confidence that the face is real (0.0-1.0) | 0.98 |
| confidence | float | Overall detection confidence (0.0-1.0) | 0.95 |
| attack_type | integer | Numeric attack type code | 0 |
| attack_type_name | string | Human-readable attack type | real |
| attack_type_confidence | float | Confidence in the attack type classification | 0.92 |

Example request:

{
"model": "face-liveness",
"video_url": "https://example.com/selfie.mp4"
}

Example response:

{
"inference_id": "inf_xyz789abc123def456",
"model_id": "face_liveness",
"model_name": "Face Liveness Detection",
"moderation_type": "video",
"status": "completed",
"result": {
"is_real": true,
"is_spoof": false,
"real_probability": 0.98,
"confidence": 0.95,
"attack_type": 0,
"attack_type_name": "real",
"attack_type_confidence": 0.92
},
"response_time_ms": 2100,
"created_at": "2026-02-07T10:04:55Z",
"completed_at": "2026-02-07T10:04:57Z"
}

If you exceed the rate limit, the API returns a 429 HTTP error code along with an error message. You should then retry with an exponential back-off strategy: retry after 4 seconds, then 8 seconds, then 16 seconds, and so on.

Integrate Face Liveness Detection into your application today with our easy-to-use API.
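The 429 retry guidance can be wrapped around the request itself. A sketch using only Python's standard library; the endpoint URL and Authorization header scheme are placeholders, not the documented API surface, and only the request body fields (`model`, `video_url`) come from the example above:

```python
import json
import time
import urllib.error
import urllib.request

def backoff_delays(retries: int, base: float = 4.0) -> list[float]:
    """Exponential back-off schedule: 4s, 8s, 16s, ..."""
    return [base * (2 ** i) for i in range(retries)]

def detect_liveness(video_url: str, api_url: str, api_key: str,
                    retries: int = 3) -> dict:
    """POST a liveness request, retrying on HTTP 429 with exponential back-off."""
    body = json.dumps({"model": "face-liveness", "video_url": video_url}).encode()
    for delay in [0.0] + backoff_delays(retries):
        if delay:
            time.sleep(delay)  # wait before retrying a rate-limited request
        req = urllib.request.Request(
            api_url, data=body,
            headers={"Content-Type": "application/json",
                     "Authorization": f"Bearer {api_key}"},  # placeholder auth scheme
        )
        try:
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)
        except urllib.error.HTTPError as err:
            if err.code != 429:
                raise  # only rate-limit responses are retried
    raise RuntimeError("rate limited after all retries")
```

With the default of three retries, the call waits 4, 8, and 16 seconds between attempts before giving up.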