
Selectively blurs young faces (estimated to be under 23 years old) while keeping older adult faces visible. The threshold includes a safety margin so that minors remain protected despite age estimation uncertainty.
Child privacy protection has become one of the most critical and legally consequential challenges in digital content. Every major jurisdiction has enacted specific protections for minors' images. COPPA in the United States prohibits collecting personal information from children under 13 without parental consent—and faces are biometric identifiers. GDPR imposes stricter requirements for processing children's data, requiring explicit parental consent. The UK's Age Appropriate Design Code mandates privacy-by-default for services likely to be accessed by children. Non-compliance carries fines in the tens of millions and potential criminal liability.
News organizations face an impossible editorial dilemma. A school shooting, a youth sports championship, a community event—all newsworthy stories that necessarily involve minors. Publishing standards across most democracies prohibit identifying child victims without parental consent. International standards prevent identifying juvenile suspects. But manually identifying and blurring every minor in breaking news footage while competitors publish creates a competitive disadvantage. Legal compliance and publishing first shouldn't be mutually exclusive.
Social media platforms process billions of images containing children daily. Parents share photos of birthday parties where other children appear. Users post vacation photos with kids in the background. Event attendees upload images from school functions. Each image creates potential liability—the uploading user may have their own children's consent (debatable) but certainly not consent from every other child in frame. Platforms that host these images become data processors under privacy law, sharing responsibility for protecting minors who appear without their guardians' knowledge.
School and youth organizations generate enormous quantities of imagery requiring selective redaction. Sports leagues photograph games where opposing teams' minors appear. Schools document events for yearbooks and newsletters. Youth programs create training materials. Religious organizations publish bulletins. Each image may contain children from multiple families with varying consent levels. Some parents want their children featured; others explicitly opt out. Blanket redaction of all faces destroys the content's value. Manual selective redaction based on consent lists is error-prone and prohibitively expensive.
Child exploitation prevention adds another dimension of urgency. Images of minors can be misused for targeting, trafficking, and exploitation. Even seemingly innocent images enable predatory behavior when faces can be searched and identified. Social engineering attacks target children identified from public imagery. Reducing the availability of identifiable minor faces in public content serves as a protective measure against threats most parents never anticipate when posting family photos.
The selective nature of minor-only redaction creates unique technical challenges that blanket redaction doesn't face. The system must accurately estimate age from facial features—a task with inherent uncertainty, especially around the critical 17-18 boundary. False negatives (missing a minor) create legal liability. False positives (blurring adults) create user complaints. The 23-year threshold provides a safety margin that accounts for age estimation uncertainty while minimizing unnecessary adult redaction.
This model uses a 23-year age threshold instead of 18 to provide a safety margin for age estimation uncertainty. AI age estimation has an inherent margin of error (~5 years), so blurring faces estimated under 23 ensures that actual minors (under 18) are protected even when the model slightly overestimates their age. This conservative approach prioritizes child protection over precision.
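As a rough illustration of the margin logic described above (this is not the service's actual code; the `should_blur` helper and the exact arithmetic are assumptions based on the ~5-year error figure in the previous paragraph):

```python
# Minimal sketch of the safety-margin decision. The ~5-year error figure
# comes from the documentation above; everything else is hypothetical.
AGE_OF_MAJORITY = 18
ESTIMATION_ERROR_YEARS = 5
BLUR_THRESHOLD = AGE_OF_MAJORITY + ESTIMATION_ERROR_YEARS  # 23

def should_blur(estimated_age: float) -> bool:
    """Blur whenever the true age could plausibly be under 18.

    Example: a 17-year-old overestimated by the full 5-year margin
    reads as 22, which is still below 23, so the face is blurred.
    """
    return estimated_age < BLUR_THRESHOLD
```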
| Parameter | Value |
|---|---|
| Age Threshold | < 23 years (safety margin) |
| Blur Method | Gaussian blur (sigma=20) |
| Confidence Threshold | 0.2 |
| Output Format | PNG |
| Max File Size | 20MB |
| Supported Formats | GIF, JPEG, JPG, PNG, WebP |
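If you want to preview the blur effect locally, the sketch below applies a Gaussian blur with sigma=20 to a face bounding box. It assumes OpenCV; the service's actual blur implementation is not published, and `blur_face` is a hypothetical helper.

```python
# Local preview of the blur step: Gaussian blur (sigma=20) over a face
# bounding box. Assumes OpenCV; illustrative only, not the service code.
import cv2

def blur_face(image, bbox, sigma=20):
    """Blur one face region in place and return the image (NumPy array)."""
    x1, y1, x2, y2 = bbox["x1"], bbox["y1"], bbox["x2"], bbox["y2"]
    # ksize=(0, 0) lets OpenCV derive the kernel size from sigma.
    image[y1:y2, x1:x2] = cv2.GaussianBlur(image[y1:y2, x1:x2], (0, 0), sigma)
    return image

img = cv2.imread("photo.jpg")
img = blur_face(img, {"x1": 100, "y1": 100, "x2": 200, "y2": 200})
cv2.imwrite("blurred.png", img)
```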
To blur all faces regardless of age, see Face Redaction.
Blur faces in images for privacy protection.
Request parameters:

| Parameter | Type | Description |
|---|---|---|
| image_url | string | URL of image to process |
| base64_image | string | Base64-encoded image data |
Response fields:

| Field | Type | Description |
|---|---|---|
| png_image_base64 | string | Processed image with blurred faces (PNG, base64-encoded) |
| faces_detected | integer | Total faces found in the image |
| faces_redacted | integer | Number of faces that were blurred |
| image_size | object | Original image dimensions { width, height } |
| redacted_faces | array | Details of each blurred face including bbox and confidence |
Example request:

```json
{
  "model": "face-redaction-minors",
  "image_url": "https://example.com/photo.jpg"
}
```

Example response:

```json
{
  "success": true,
  "data": {
    "png_image_base64": "<base64-data>",
    "faces_detected": 3,
    "faces_redacted": 1,
    "image_size": {
      "width": 1920,
      "height": 1080
    },
    "redacted_faces": [
      {
        "bbox": {
          "x1": 100,
          "y1": 100,
          "x2": 200,
          "y2": 200
        },
        "age": 15.2,
        "is_minor": true,
        "confidence": 0.95
      }
    ]
  }
}
```
If you exceed the rate limit, the API returns a 429 HTTP error code along with an error message. You should then retry with an exponential back-off strategy, meaning that you should retry after 4 seconds, then 8 seconds, then 16 seconds, and so on.

Integrate Face Redaction of Minors into your application today with our easy-to-use API.
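As a starting point, here is a hypothetical Python client that sends the request shown above, applies the 429 exponential back-off, and saves the returned PNG. The endpoint URL and Authorization header are placeholders, not real values; consult the API reference for those.

```python
# Hypothetical client sketch. API_URL and the auth header are placeholders;
# the payload and response envelope follow the examples above.
import base64
import time

import requests

API_URL = "https://api.example.com/v1/predict"  # placeholder, not the real endpoint
API_KEY = "YOUR_API_KEY"

def redact_minors(image_url: str, max_retries: int = 5) -> dict:
    """POST the request, retrying on 429 with exponential back-off (4s, 8s, 16s, ...)."""
    payload = {"model": "face-redaction-minors", "image_url": image_url}
    delay = 4
    for _ in range(max_retries):
        resp = requests.post(
            API_URL,
            json=payload,
            headers={"Authorization": f"Bearer {API_KEY}"},
            timeout=60,
        )
        if resp.status_code == 429:
            time.sleep(delay)
            delay *= 2  # 4s -> 8s -> 16s ...
            continue
        resp.raise_for_status()
        return resp.json()["data"]
    raise RuntimeError("still rate-limited after all retries")

data = redact_minors("https://example.com/photo.jpg")
print(f"{data['faces_redacted']} of {data['faces_detected']} faces blurred")
with open("redacted.png", "wb") as f:
    f.write(base64.b64decode(data["png_image_base64"]))
```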