
Locate AI-edited regions in images with precise bounding boxes. Catches surgical edits — altered receipt totals, faked insurance damage, swapped ID fields — that whole-image AI detectors miss. Detects edits from Nano-Banana, Flux Kontext, GPT-4o, Qwen-Image-Edit, Bagel, Step1X-Edit, TextFlux and more.
The Bynn AI Edited Image Forgery Detection model identifies which specific regions of an image have been altered by modern AI image editors. Unlike whole-image AI detectors that only tell you whether an image looks synthetic, this model draws a precise bounding box around every region that has been repainted — so investigators can see exactly what changed: which dollar amount on a receipt, which scratch on a car, which line item on an invoice, which field on a driver's license.
Image manipulation used to require Photoshop skills, time, and a careful eye. In 2026 it requires a sentence. A user opens Google Nano-Banana, Flux Kontext, GPT-4o Image, Qwen-Image-Edit, Bagel, Step1X-Edit, or TextFlux, uploads a real photograph, types "change the total to $4,800" or "add a dent to the front bumper", and seconds later receives a new image where only that one region has been seamlessly repainted. The lighting matches. The shadows match. The texture matches. Every pixel outside the edit is byte-identical to the original. To the human eye, the result is indistinguishable from an authentic photograph.
This has rewired the economics of fraud. Insurance carriers report a sharp rise in suspicious claim photos: hailstorm damage added to a roof that was inspected clean weeks earlier, fender dents appearing only in the submitted photo, water damage extending exactly to where a covered policy boundary ends. Each edit takes 30 seconds and a $20 subscription. Each successful claim pays thousands. The math overwhelmingly favors the fraudster.
Expense and accounts payable fraud has followed the same curve. A consultant inflates an $80 client dinner to $480 by editing one digit. A contractor submits a vendor invoice where the line item descriptions are real but the totals have been raised by 15%. A traveler submits a hotel receipt where the dates have been shifted to fall inside a covered trip window. The receipts look authentic because they are authentic — only one number changed. Manual review cannot keep up: the edits are surgical, the originals are unavailable for comparison, and finance teams approve thousands of receipts a month.
Identity fraud has the highest stakes. AI editors can change a date of birth on a passport, swap a photo on a driver's license, alter the address on a utility bill, or modify the issue date on an ID — without disturbing any of the security features, fonts, or background graphics that normally trip up forgers. KYC reviewers see a document that passes every visual check; only the one piece of information that mattered to the fraudster has been changed.
Editorial integrity is at risk too. A news photo where one person has been edited out of a sensitive scene. A product photo on a marketplace where a defect has been removed. A dating profile where blemishes, tattoos, or accessories have been quietly altered. A court exhibit where a key piece of evidence has been added or erased. Conventional AI-image detectors, trained to recognize fully synthetic images, look at these photos and report "authentic" — because 99% of the pixels really are. The edit hides in the 1%.
Detecting these surgical edits requires a different kind of model: one that does not just classify the whole image, but actively localizes which pixels were repainted. That is what Bynn AI Edited Image Forgery Detection does.
"forged" or "no_forgery_detected". The negative label only means this model found no edits; it is not a certification of authenticity.[x, y, width, height]Example response:
```json
{
  "is_forged": true,
  "forgery_probability": 0.997,
  "confidence": 0.886,
  "label": "forged",
  "num_regions": 1,
  "regions": [
    {
      "x": 527, "y": 972, "width": 121, "height": 41,
      "bbox": [527, 972, 121, 41],
      "confidence": 0.997,
      "mean_probability": 0.886,
      "area": 3881
    }
  ]
}
```
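A minimal sketch of consuming a response shaped like the one above. The field names follow the documented example; the payload here is inlined for illustration rather than fetched from the real API:

```python
import json

# Example response payload, matching the fields documented above
raw = """
{
  "is_forged": true,
  "forgery_probability": 0.997,
  "confidence": 0.886,
  "label": "forged",
  "num_regions": 1,
  "regions": [
    {"x": 527, "y": 972, "width": 121, "height": 41,
     "bbox": [527, 972, 121, 41],
     "confidence": 0.997, "mean_probability": 0.886, "area": 3881}
  ]
}
"""

result = json.loads(raw)
if result["is_forged"]:
    for r in result["regions"]:
        # Convert [x, y, width, height] to corner coordinates for cropping/drawing
        x1, y1 = r["x"], r["y"]
        x2, y2 = r["x"] + r["width"], r["y"] + r["height"]
        print(f"edited region at ({x1},{y1})-({x2},{y2}), "
              f"peak confidence {r['confidence']:.3f}")
```

The corner form `(x1, y1)-(x2, y2)` is what most drawing and cropping APIs expect, so converting once up front keeps downstream review tooling simple.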
Evaluated on a held-out test split spanning nine editor families with threshold-calibrated inference (best F1 operating point):
| Metric | Score |
|---|---|
| F1 | 0.7447 |
| IoU | 0.5932 |
| Precision | 0.7429 |
| Recall | 0.7464 |
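The IoU figure measures overlap between predicted and ground-truth edited regions. For axis-aligned boxes in the [x, y, width, height] format this API uses, it can be computed as follows (an illustrative helper, not part of the API):

```python
def box_iou(a, b):
    """Intersection-over-union of two [x, y, width, height] boxes."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    # Intersection rectangle, clamped to zero when the boxes are disjoint
    iw = max(0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

# A prediction that sits a few pixels off the ground-truth box still scores high
print(box_iou([527, 972, 121, 41], [530, 970, 121, 41]))
```

An IoU of 0.59 overall means the predicted boxes typically capture the edited region with a modest margin of error around its borders.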
Per-editor F1 and IoU:
| Editor | F1 | IoU |
|---|---|---|
| Qwen-Image-Edit (Non-Asian portraits) | 0.8672 | 0.7656 |
| Qwen-Image-Edit (Asian portraits) | 0.8671 | 0.7654 |
| Gemini + Ideogram + GPT-Image | 0.8116 | 0.6829 |
| Flux Kontext | 0.7484 | 0.5980 |
| Nano-Banana | 0.6031 | 0.4318 |
| TextFlux | 0.5646 | 0.3934 |
| Bagel | 0.5517 | 0.3810 |
| GPT-4o | 0.4374 | 0.2800 |
Best practice: Run AI-Generated Image Detection first to catch fully synthetic images. For images that pass the synthetic check, run this model to surface the surgical edits that whole-image classifiers cannot see. Together they cover both ends of the AI-fraud spectrum — pure synthesis and prompt-driven retouching.
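The two-stage screening described above can be sketched as follows. The detector callables are hypothetical stand-ins for the two Bynn models; the real client interface and the field name for the synthetic check will differ:

```python
# Two-stage screening sketch. `detect_ai_generated` and `detect_ai_edits`
# are hypothetical wrappers around the two models described above.
def screen_image(image_url, detect_ai_generated, detect_ai_edits):
    synth = detect_ai_generated(image_url)
    if synth.get("is_ai_generated"):
        return {"verdict": "synthetic"}
    edits = detect_ai_edits(image_url)
    if edits.get("is_forged"):
        return {"verdict": "edited", "regions": edits["regions"]}
    # Neither model flagged anything: no evidence found, not proof of authenticity
    return {"verdict": "no_forgery_detected"}

# Demo with stubbed detector responses
demo = screen_image(
    "https://example.com/receipt.jpg",
    lambda url: {"is_ai_generated": False},
    lambda url: {"is_forged": True,
                 "regions": [{"x": 527, "y": 972, "width": 121, "height": 41}]},
)
print(demo["verdict"])  # edited
```

Running the whole-image check first is the cheaper path: fully synthetic images never reach the localization model, which only sees images that already passed as "real".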
Pixel-level localization of AI-edited regions in images. Triple-stream ICL-Net trained on PromptForge-350k.
Request parameters:

| Parameter | Type | Description | Example |
|---|---|---|---|
| image_url | string | URL of the image to analyze for AI-edited regions | https://example.com/image.jpg |
| base64_image | string | Base64-encoded image data | /9j/4AAQSkZJRgABAQAA... |

Response fields (forgery localization with binary verdict and region bounding boxes):

| Field | Type | Description | Example |
|---|---|---|---|
| is_forged | boolean | True if AI editing was detected anywhere in the image. False means this model found no evidence of editing — NOT a guarantee that the image is authentic. | true |
| forgery_probability | float | Probability that the image contains AI edits (0.0-1.0) | 0.87 |
| confidence | float | Model confidence in the classification (0.0-1.0) | 0.91 |
| label | string | Classification label. "no_forgery_detected" only means this model found no evidence of editing — pair with AI-Generated Image Detection and Document Tampering Detection for a full picture. | forged |
| num_regions | integer | Number of distinct edited regions detected | 2 |
| regions | array | Bounding boxes for edited regions. Each region: x, y, width, height, bbox [x, y, w, h], confidence (peak per-pixel probability), mean_probability, area (forged pixel count). | see below |

Example regions value:

```json
[
  {
    "x": 100,
    "y": 150,
    "width": 200,
    "height": 120,
    "bbox": [100, 150, 200, 120],
    "confidence": 0.99,
    "mean_probability": 0.88,
    "area": 3881
  }
]
```

Example request:

```json
{
  "model": "ai-edited-image-forgery",
  "image_url": "https://example.com/edited_photo.jpg"
}
```

Example response:

```json
{
  "success": true,
  "data": {
    "is_forged": true,
    "forgery_probability": 0.87,
    "confidence": 0.91,
    "label": "forged",
    "num_regions": 2,
    "regions": [
      {
        "x": 100,
        "y": 150,
        "width": 200,
        "height": 120,
        "bbox": [100, 150, 200, 120],
        "confidence": 0.99,
        "mean_probability": 0.88,
        "area": 3881
      },
      {
        "x": 420,
        "y": 80,
        "width": 90,
        "height": 60,
        "bbox": [420, 80, 90, 60],
        "confidence": 0.95,
        "mean_probability": 0.81,
        "area": 1240
      }
    ]
  }
}
```

If you exceed the rate limit, the API returns a 429 HTTP error code along with an error message. You should then retry with an exponential back-off strategy, meaning that you should retry after 4 seconds, then 8 seconds, then 16 seconds, and so on.

Integrate AI Edited Image Forgery Detection into your application today with our easy-to-use API.
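The recommended 4 s / 8 s / 16 s back-off schedule for 429 responses can be sketched as below. `call_api` is a hypothetical transport function returning `(status, body)`; substitute your actual HTTP client:

```python
import time

# Retry sketch for 429 rate-limit responses: wait 4 s, then 8 s, then 16 s, ...
# `call_api` is a hypothetical stand-in for the real HTTP request.
def call_with_backoff(call_api, payload, max_retries=5, base_delay=4.0):
    delay = base_delay
    for attempt in range(max_retries + 1):
        status, body = call_api(payload)
        if status != 429:
            return status, body
        if attempt == max_retries:
            break
        time.sleep(delay)   # wait before retrying
        delay *= 2          # exponential back-off: 4, 8, 16, ...
    raise RuntimeError("rate-limited after retries")
```

Adding a small random jitter to each delay is a common refinement that prevents many clients from retrying in lockstep after a shared rate-limit window.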