Face Liveness Detection

iBeta Level 2 certified face liveness detection. Prevent spoofing attacks from photos, videos, and masks in biometric authentication systems.

Accuracy: 98.2%
Avg. Speed: 2.1s
Per Minute: $0.0240
API Name: face-liveness

Bynn Face Liveness Detection

The Bynn Face Liveness Detection model provides passive anti-spoofing for biometric authentication systems. It determines whether the face in a video belongs to a live person physically present or is a presentation attack—such as a printed photo, 3D mask, or screen replay. The model is iBeta Level 2 compliant, meeting the most rigorous standards required for secure identity verification.

The Challenge

Facial recognition has become the gold standard for identity verification. Banks use it for account access. Airports use it for border control. Enterprises use it for secure facilities. But every facial recognition system is vulnerable to a fundamental weakness: it cannot inherently distinguish a live person from a representation of that person.

Presentation attacks have become devastatingly sophisticated. Attackers print high-resolution photos on premium paper. They display videos on 4K screens. They commission realistic 3D-printed masks. They use deepfake-generated videos. Each attack vector exploits the same vulnerability—the camera captures pixels, not presence. Without liveness detection, facial recognition is security theater.

The stakes are enormous. Account takeover fraud costs billions annually. A printed photo bypassing facial authentication can drain bank accounts, access medical records, or breach corporate systems. Identity theft victims spend years recovering from compromised biometric data—unlike passwords, you cannot change your face.

Traditional anti-spoofing approaches fail or frustrate users. "Blink detection" is trivially defeated by video playback. "Turn your head" instructions slow onboarding and confuse users, leading to abandonment. Challenge-response methods add friction that drives customers to competitors. Active liveness demands user cooperation that many refuse to provide.

Screen replay attacks are particularly insidious. An attacker records a victim's face—from social media, a video call, or surreptitious filming—then plays it back on a phone screen held to the camera. The displayed video includes natural motion, blinking, and expression changes. To a naive system, it looks alive. Detection requires analyzing subtle signals invisible to the human eye.

Print attacks remain common despite their simplicity. A color laser print of someone's face, especially with eye holes cut out to simulate blinking, defeats many systems. Paper texture, color gamut limitations, and flatness provide detection signals—but only if the system knows what to look for.

3D mask attacks target high-security systems. Silicone masks with realistic skin texture, custom-fitted to the victim's face shape, can fool systems that rely solely on 2D analysis. Detecting these requires understanding facial geometry—real faces have depth and curvature that flat representations lack.

Model Overview

The Bynn Face Liveness Detection model uses passive liveness—it requires no user action. No blinking on command, no head turning, no reading numbers aloud. Users simply look at the camera naturally while the system analyzes the video stream for authenticity signals.

The model achieves 98.2% accuracy with iBeta Level 2 compliance:

  • False Accept Rate (FAR): 0.36% — spoofs almost never pass as real
  • False Reject Rate (FRR): 4.97% — genuine users are rarely rejected
  • Average Classification Error Rate (ACER): 1.76%
  • Area Under the ROC Curve (AUC): 0.9988 — near-perfect discrimination

How It Works

The model analyzes multiple dimensions of the video to distinguish live faces from attacks:

  • 3D geometry analysis: Reconstructs facial depth to verify three-dimensional structure—real faces have contours, screens and prints are flat
  • Micro-texture detection: Identifies artifacts invisible to the human eye—screen pixel patterns, paper grain, printing artifacts
  • Temporal consistency: Analyzes motion patterns across frames to detect frozen images, unnatural movements, or video replay artifacts
  • Face-only analysis: Examines skin pixels exclusively, ignoring backgrounds and device bezels that could provide misleading signals

Input Requirements

The model requires sufficient video data for reliable analysis:

  • Video duration: Minimum 5 seconds recommended for optimal accuracy
  • Frame sampling: 8 frames extracted using sparse temporal sampling across the video duration
  • Alternative input: Array of 8 face images if video is not available
  • Face visibility: Face must be clearly visible and unobstructed in all frames
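The sparse temporal sampling described above can be sketched in a few lines: pick the middle frame of each of 8 equal segments so the samples span the whole clip. `sparse_sample_indices` is an illustrative helper, not part of the API:

```python
def sparse_sample_indices(total_frames, num_samples=8):
    """Return evenly spread frame indices (sparse temporal sampling).

    Splits the clip into num_samples equal segments and takes the
    middle frame of each, so samples cover the full video duration.
    """
    if total_frames < num_samples:
        raise ValueError("video too short: need at least %d frames" % num_samples)
    segment = total_frames / num_samples
    return [int(segment * i + segment / 2) for i in range(num_samples)]
```

For a 30 fps clip, the recommended 5-second minimum gives roughly 150 frames, so each sampled frame is drawn from a distinct ~0.6 s window.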

Response Structure

The API returns comprehensive liveness analysis:

  • is_real: Boolean indicating liveness check passed (true = live person)
  • is_spoof: Boolean indicating presentation attack detected
  • real_probability: Confidence that face is live (0.0-1.0)
  • confidence: Overall detection confidence (0.0-1.0)
  • attack_type: Numeric code identifying attack category (0-3)
  • attack_type_name: Human-readable attack type:
    • real: Live person detected
    • print_2d: Printed photo attack
    • mask_3d: 3D mask attack
    • replay_screen: Screen replay attack
  • attack_type_confidence: Confidence in attack type classification (0.0-1.0)
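A small sketch of consuming this structure. The code-to-name mapping below is an assumption: 0 = real is confirmed by the example response later in this document, and the remaining codes are assigned in listing order:

```python
# Assumed mapping of attack_type codes to names; only 0 = "real" is
# confirmed by the documented example, the rest follow listing order.
ATTACK_TYPES = {0: "real", 1: "print_2d", 2: "mask_3d", 3: "replay_screen"}

def summarize(result):
    """Turn a liveness result dict into a short human-readable verdict."""
    if result["is_real"]:
        return "live person (p=%.2f)" % result["real_probability"]
    name = result.get("attack_type_name") or ATTACK_TYPES.get(
        result["attack_type"], "unknown")
    return "spoof detected: %s (conf=%.2f)" % (name, result["attack_type_confidence"])
```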

Attack Types Detected

2D Print Attacks

  • High-resolution photo printouts
  • Magazine or newspaper clippings
  • ID card and document photos
  • Photos with cut-out eye holes

3D Mask Attacks

  • Silicone and latex masks
  • 3D-printed face replicas
  • Mannequin heads with printed faces
  • Paper craft and folded photo masks

Screen Replay Attacks

  • Phone and tablet screen displays
  • Monitor and TV screen replays
  • Pre-recorded video playback
  • Deepfake video presentations

Performance Metrics

Metric                      Value
Detection Accuracy          98.2%
iBeta Compliance            Level 2
False Accept Rate (FAR)     0.36%
False Reject Rate (FRR)     4.97%
ACER                        1.76%
AUC                         0.9988
Average Response Time       2,100 ms
Maximum File Size           50 MB
Minimum Video Duration      5 seconds (8 frames)
Supported Formats           MP4, MOV, AVI, WebM, MKV
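Checking the file-size and format limits client-side avoids spending an API call on an upload that will be rejected. A minimal sketch against the limits above, using the file extension as a proxy for container format (helper name is illustrative):

```python
import os

ALLOWED_EXTENSIONS = {".mp4", ".mov", ".avi", ".webm", ".mkv"}
MAX_BYTES = 50 * 1024 * 1024  # 50 MB limit from the table above

def validate_upload(path):
    """Pre-flight check: supported container extension and size limit."""
    ext = os.path.splitext(path)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        return False, "unsupported format: %s" % (ext or "none")
    if os.path.getsize(path) > MAX_BYTES:
        return False, "file exceeds 50MB limit"
    return True, "ok"
```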

Use Cases

  • KYC Onboarding: Verify that identity document photos match a live person during account opening
  • Banking Authentication: Secure high-value transactions with biometric verification that resists spoofing
  • Remote Identity Verification: Ensure the person completing verification is physically present, not submitting recordings
  • Access Control: Protect physical and digital access points from presentation attacks
  • Healthcare Verification: Confirm patient identity for telemedicine and prescription services
  • Age Verification: Ensure the verified individual is present, not using someone else's identity
  • Exam Proctoring: Verify test-takers are who they claim to be throughout remote examinations
  • Re-authentication: Confirm returning users during step-up authentication for sensitive operations

Known Limitations

Important Considerations:

  • Video quality: Very low resolution or heavily compressed video may reduce accuracy
  • Lighting conditions: Extreme lighting (very dark or overexposed) affects detection reliability
  • Face visibility: Partial occlusion (masks, hands, hair) may impact analysis
  • Motion blur: Excessive camera shake or rapid movement degrades frame quality
  • Novel attacks: Highly sophisticated or novel attack methods may require model updates

Disclaimers

This model provides probability-based liveness detection as part of a defense-in-depth security strategy.

  • Layered Security: Combine with deepfake detection, facial recognition, document verification, and other identity signals for robust verification
  • Threshold Configuration: Adjust acceptance thresholds based on your security requirements and user experience tolerance
  • False Rejection Handling: Provide retry mechanisms and fallback verification paths for legitimate users who fail liveness checks
  • Regulatory Compliance: Ensure your implementation meets applicable biometric data regulations (GDPR, BIPA, etc.)
  • Continuous Monitoring: Monitor for new attack vectors and update defenses as threats evolve
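The threshold-configuration and false-rejection bullets above can be sketched as a simple decision function. The cut-off values are illustrative placeholders, not recommended defaults; tune them to your own security/UX trade-off:

```python
def liveness_decision(real_probability, confidence,
                      accept_threshold=0.90, min_confidence=0.80):
    """Map a liveness result to an action. Thresholds are illustrative."""
    if confidence < min_confidence:
        return "retry"     # low-confidence result: ask the user to try again
    if real_probability >= accept_threshold:
        return "accept"
    return "fallback"      # route to a secondary verification path
```

Raising `accept_threshold` lowers the effective false-accept rate at the cost of more false rejections, which is why a retry/fallback path matters for legitimate users.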

Best Practice: Deploy liveness detection alongside deepfake detection for comprehensive anti-spoofing. Liveness catches physical presentation attacks (photos, masks, screens), while deepfake detection catches AI-generated synthetic faces. Combine both with document authentication, facial matching, device fingerprinting, and behavioral analysis. No single control is foolproof—security comes from defense in depth.

API Reference

Version: 2601 (Jan 3, 2026)
Avg. Processing: 2.1s
Per Minute: $0.024
Required Plan: trial

Input Parameters

Face anti-spoofing for biometric authentication (requires 8 frames for temporal analysis)

video_url (string)

URL of the video to analyze for liveness detection (preferred input)

Example:
https://example.com/selfie.mp4

image_url (string | array)

Single video URL or an array of 8 frame images

Example:
https://example.com/video.mp4

Response Fields

Liveness detection result with attack type identification

is_real (boolean)

True if a real face was detected (liveness pass)

Example:
true

is_spoof (boolean)

True if a presentation attack was detected

Example:
false

real_probability (float)

Confidence that the face is real (0.0-1.0)

Example:
0.98

confidence (float)

Overall detection confidence (0.0-1.0)

Example:
0.95

attack_type (integer)

Numeric attack type code

Example:
0

attack_type_name (string)

Human-readable attack type

Example:
real

attack_type_confidence (float)

Confidence in the attack type classification (0.0-1.0)

Example:
0.92

Complete Example

Request

{
  "model": "face-liveness",
  "video_url": "https://example.com/selfie.mp4"
}
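A minimal client sketch for the request above using only Python's standard library. The endpoint URL and authorization header are placeholders, not documented values:

```python
import json
import urllib.request

API_URL = "https://api.example.com/v1/inference"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                          # placeholder credential

def build_request(video_url, model="face-liveness"):
    """Build the liveness detection request shown above as a POST."""
    payload = {"model": model, "video_url": video_url}
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer " + API_KEY},
        method="POST",
    )

# Usage (performs a network call):
# with urllib.request.urlopen(build_request("https://example.com/selfie.mp4")) as r:
#     result = json.load(r)
```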

Response

{
  "inference_id": "inf_xyz789abc123def456",
  "model_id": "face_liveness",
  "model_name": "Face Liveness Detection",
  "moderation_type": "video",
  "status": "completed",
  "result": {
    "is_real": true,
    "is_spoof": false,
    "real_probability": 0.98,
    "confidence": 0.95,
    "attack_type": 0,
    "attack_type_name": "real",
    "attack_type_confidence": 0.92
  },
  "response_time_ms": 2100,
  "created_at": "2026-02-07T10:04:55Z",
  "completed_at": "2026-02-07T10:04:57Z"
}
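Parsing the verdict and wall-clock duration out of a completed response can be sketched as follows (the `Z` suffix is normalized for compatibility with Python versions before 3.11):

```python
import json
from datetime import datetime

def parse_result(raw):
    """Extract the liveness verdict and duration from a completed inference."""
    data = json.loads(raw)
    started = datetime.fromisoformat(data["created_at"].replace("Z", "+00:00"))
    finished = datetime.fromisoformat(data["completed_at"].replace("Z", "+00:00"))
    return data["result"]["is_real"], (finished - started).total_seconds()
```

Note that the timestamp difference is coarser than `response_time_ms` (whole seconds vs. milliseconds), so prefer the latter for latency monitoring.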

Additional Information

Rate Limiting
If your request is throttled, you will receive an HTTP 429 error code along with an error message. Retry with an exponential back-off strategy: wait 4 seconds, then 8 seconds, then 16 seconds, and so on.
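The back-off strategy above can be sketched as a small wrapper; `call` is any function returning an HTTP status and body, and the retry cap is an illustrative choice:

```python
import time

def with_backoff(call, max_retries=5, base_delay=4.0):
    """Retry a callable on HTTP 429 with exponential back-off: 4s, 8s, 16s, ..."""
    for attempt in range(max_retries):
        status, body = call()
        if status != 429:
            return status, body
        time.sleep(base_delay * (2 ** attempt))  # 4, 8, 16, ... seconds
    raise RuntimeError("still rate limited after %d retries" % max_retries)
```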
Supported Formats
mp4, mov, avi, webm, mkv
Maximum File Size
50MB
Tags: liveness, anti-spoofing, biometric, security

Ready to get started?

Integrate Face Liveness Detection into your application today with our easy-to-use API.