
Content Moderation

FloImg Studio includes built-in content moderation to ensure platform safety. All generated images pass through moderation before being stored.

Scan Before Save — Nothing touches disk without passing moderation.

Generator → Image Buffer → Moderation API → Pass → Save to Disk
                                          → Fail → Block + Log

When an image is generated:

  1. The generator produces an image buffer
  2. The buffer is sent to the moderation API
  3. If flagged, the save is blocked and an incident is logged
  4. If clean, the image proceeds to storage
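The four steps above can be sketched as a small gate function. `scanBeforeSave`, `moderate`, `save`, and `logIncident` are hypothetical stand-ins for FloImg's internals, not its actual API, and the sketch is synchronous for brevity (the real pipeline is asynchronous):

```typescript
// Hypothetical sketch of the scan-before-save pipeline described above.
type ModerationResult = { flagged: boolean; categories: string[] };

function scanBeforeSave(
  buffer: Uint8Array,
  moderate: (buf: Uint8Array) => ModerationResult,
  save: (buf: Uint8Array) => void,
  logIncident: (r: ModerationResult) => void,
): boolean {
  const result = moderate(buffer); // step 2: moderation runs before any write
  if (result.flagged) {
    logIncident(result);           // step 3: block the save and record an incident
    return false;
  }
  save(buffer);                    // step 4: only clean images reach storage
  return true;
}
```

The key property is ordering: `save` is unreachable until `moderate` has returned an unflagged result, which is what "nothing touches disk without passing moderation" means in practice.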

FloImg checks 11 content categories using OpenAI’s moderation API:

| Category | Description |
| --- | --- |
| `sexual` | Sexual content |
| `sexual/minors` | Sexual content involving minors |
| `hate` | Hate speech |
| `hate/threatening` | Threatening hate speech |
| `harassment` | Harassing content |
| `harassment/threatening` | Threatening harassment |
| `self-harm` | Self-harm content |
| `self-harm/intent` | Self-harm intent |
| `self-harm/instructions` | Self-harm instructions |
| `violence` | Violent content |
| `violence/graphic` | Graphic violence |

When the moderation API flags an image:

  1. Save blocked — The image is not written to disk
  2. Incident logged — Details recorded to ./data/moderation/incidents.jsonl
  3. Error returned — Client receives “Content policy violation” error
  4. Console warning — Category details logged for debugging
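A minimal sketch of the append-only JSONL incident logging, using the field names from the sample record shown later on this page. The function name and signature are illustrative assumptions, not FloImg's actual API:

```typescript
import { appendFileSync, mkdirSync } from "node:fs";
import { dirname } from "node:path";

// Field names mirror the sample incident record documented on this page.
interface Incident {
  timestamp: string;
  type: string;
  flagged: boolean;
  categories: string[];
  scores: Record<string, number>;
  context: Record<string, string>;
}

// Hypothetical helper: append one incident as a single JSON line.
function logIncident(
  incident: Incident,
  path = "./data/moderation/incidents.jsonl",
): void {
  mkdirSync(dirname(path), { recursive: true });
  // One JSON object per line; append-only, so past records are never rewritten.
  appendFileSync(path, JSON.stringify(incident) + "\n");
}
```

Appending a complete line per incident keeps the file valid JSONL even if the process crashes between writes, which is what makes it usable as an audit trail.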

| Format | Handling |
| --- | --- |
| PNG, JPEG, GIF, WebP | Sent directly to OpenAI |
| SVG | Converted to PNG (via Resvg), then moderated |
| AVIF | Passed through (moderation skipped) |

Why convert SVG? SVGs can contain embedded images or render inappropriate text. Rasterizing ensures visual content is properly scanned.
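The per-format routing above can be sketched as a dispatch on MIME type. The `Handling` labels and the fallback for unknown types are assumptions for illustration, not FloImg's actual code:

```typescript
// Hypothetical sketch of per-format handling before moderation,
// following the format table above.
type Handling = "direct" | "rasterize" | "skip";

function moderationHandling(mimeType: string): Handling {
  switch (mimeType) {
    case "image/png":
    case "image/jpeg":
    case "image/gif":
    case "image/webp":
      return "direct";    // sent to OpenAI as-is
    case "image/svg+xml":
      return "rasterize"; // converted to PNG (e.g. via Resvg), then moderated
    case "image/avif":
      return "skip";      // passed through; moderation skipped
    default:
      return "direct";    // assumption: treat unknown raster formats like PNG
  }
}
```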

In hosted deployments:

  • Moderation is always enabled
  • MODERATION_STRICT_MODE=true — API failures block content
  • Required for gallery and cloud storage features

In self-hosted deployments:

  • Moderation is optional — works without an OpenAI key
  • MODERATION_STRICT_MODE=false by default — API failures allow content with a warning
  • Users can provide their own OpenAI API key to enable moderation
| Environment Variable | Description | Default |
| --- | --- | --- |
| `OPENAI_API_KEY` | Required for moderation | None (disabled) |
| `MODERATION_STRICT_MODE` | Block on API failures | `false` |
```sh
# Add to your environment
export OPENAI_API_KEY="sk-..."

# Optional: strict mode (recommended for production)
export MODERATION_STRICT_MODE="true"
```
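A sketch of the fail-open vs. fail-closed behaviour these variables control when the moderation API call itself errors out. The helper names are hypothetical:

```typescript
// Hypothetical sketch: MODERATION_STRICT_MODE decides what happens
// when the moderation API is unreachable or returns an error.
function isStrictMode(env: Record<string, string | undefined>): boolean {
  return env.MODERATION_STRICT_MODE === "true"; // defaults to false
}

function allowOnApiFailure(
  strict: boolean,
  warn: (msg: string) => void,
): boolean {
  if (strict) {
    return false; // strict mode: an API failure blocks the content outright
  }
  // Default: fail open — allow the save, but warn operators.
  warn("moderation API unavailable; allowing content (strict mode off)");
  return true;
}
```

Strict mode trades availability for safety, which is why the shell example above recommends it for production.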

Flagged content is logged in JSONL format for audit trails:

```json
{
  "timestamp": "2025-12-30T12:00:00.000Z",
  "type": "generated",
  "flagged": true,
  "categories": ["violence"],
  "scores": { "violence": 0.95 },
  "context": { "nodeId": "node_1" }
}
```

Logs are append-only for compliance auditing.
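Because each line is a standalone JSON object, audit tooling can aggregate the log with very little code. This summariser is an illustrative sketch, not part of FloImg:

```typescript
// Hypothetical sketch: count flagged categories across a JSONL audit log.
// Takes the file contents as a string so it is easy to test in isolation.
function countCategories(jsonl: string): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const line of jsonl.split("\n")) {
    if (!line.trim()) continue;     // tolerate the trailing newline
    const rec = JSON.parse(line);   // one incident per line
    for (const cat of rec.categories ?? []) {
      counts[cat] = (counts[cat] ?? 0) + 1;
    }
  }
  return counts;
}
```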

  • Model: omni-moderation-latest
  • Latency: ~200-500ms per image
  • Accuracy: 99.9%+ on flagged categories