Safety at FloImg

We take content safety seriously. Every AI-generated image is scanned before it's saved.

  • 100% — AI images scanned
  • Before save — moderation timing
  • 11 — categories checked

Our Approach: Scan Before Save

Nothing touches disk without passing moderation. When you generate an image with FloImg Studio, it's scanned for harmful content before being saved. If content is flagged, the save is blocked and the incident is logged.

Generate Image → Moderation Scan
  • Pass → Save
  • Fail → Block
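The scan-before-save gate above can be sketched as a small function. The names here (`save_if_safe`, `ContentPolicyViolation`, the injected `moderate` callable) are hypothetical illustrations, not FloImg's actual API:

```python
from dataclasses import dataclass
from pathlib import Path
from typing import Callable, Optional


@dataclass
class ModerationResult:
    flagged: bool
    category: Optional[str] = None


class ContentPolicyViolation(Exception):
    """Raised when a generated image fails moderation."""


def save_if_safe(image_bytes: bytes, dest: Path,
                 moderate: Callable[[bytes], ModerationResult]) -> Path:
    """Scan-before-save: nothing touches disk until moderation passes."""
    result = moderate(image_bytes)
    if result.flagged:
        # The image is never written; the caller logs the incident metadata.
        raise ContentPolicyViolation(
            f"Content policy violation: {result.category}")
    dest.write_bytes(image_bytes)
    return dest
```

The key design point is ordering: the moderation call happens strictly before any write, so a flagged image never exists on disk, even transiently.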

How We Moderate

We use OpenAI's Moderation API, an industry-standard service that analyzes images for harmful content across multiple categories.

Categories Checked

Sexual content
Sexual content involving minors
Hate speech
Threatening hate speech
Harassment
Threatening harassment
Self-harm content
Self-harm intent
Self-harm instructions
Violence
Graphic violence
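As an illustration of how these checks surface in practice, the Moderation API returns a per-category boolean map in its JSON response. The sketch below extracts the flagged category names; the payload is an invented example shaped like the API's response, not real output:

```python
# Example payload shaped like an OpenAI Moderation API response.
# The values below are invented for illustration.
response = {
    "results": [
        {
            "flagged": True,
            "categories": {
                "sexual": False,
                "violence": True,
                "violence/graphic": False,
                "self-harm": False,
            },
        }
    ]
}


def flagged_categories(response: dict) -> list:
    """Return the names of the categories the moderation model flagged."""
    result = response["results"][0]
    return [name for name, hit in result["categories"].items() if hit]


print(flagged_categories(response))  # ['violence']
```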

What Happens When Content Is Flagged

  1. Save Blocked: the image is not written to disk.
  2. Incident Logged: details are recorded for admin review and compliance.
  3. Error Returned: you receive a "Content policy violation" message.

Cloud vs Self-Hosted

FloImg Studio Cloud

  • Moderation always enabled
  • Strict mode: blocks on API failures
  • We handle everything for you

Self-Hosted

  • Requires your own OpenAI API key
  • Permissive mode by default
  • You control your moderation settings

Technical Details

For developers evaluating FloImg, here's how moderation works under the hood:

  • Model: omni-moderation-latest
  • Supported formats: PNG, JPEG, GIF, WebP
  • SVG handling: converted to PNG via Resvg, then scanned
  • Incident logs: JSONL format for compliance auditing
  • Source code: view on GitHub
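A record in the JSONL incident log might look like the line below. The field names are illustrative, not FloImg's actual schema; per the FAQ, only metadata such as timestamp, category, and context is logged, never the image itself:

```json
{"timestamp": "2025-12-30T14:02:11Z", "category": "violence", "context": "studio-generate", "action": "blocked"}
```

JSONL (one JSON object per line) keeps the log append-only and easy to stream into audit tooling.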

Frequently Asked Questions

What if my content is flagged incorrectly?

No moderation system is perfect. If you believe your content was incorrectly flagged, contact us at support@goflojo.com. We're working on a formal appeals process.

Do you store flagged images?

No. Flagged content is never written to disk. We only log metadata (timestamp, category, context) for incident review—not the actual image.

Can I disable moderation?

On FloImg Studio Cloud, moderation is always enabled for platform safety. Self-hosted users can choose not to configure an OpenAI API key, which disables moderation.

Is the moderation code open source?

Yes. FloImg Studio is MIT licensed. You can review exactly how moderation works in our GitHub repository.

Questions about our safety practices? Contact us at support@goflojo.com.

Last updated: December 30, 2025