# Rate Limits
FloImg API rate limits are per-minute and vary by plan. Understanding these limits helps you build reliable integrations.
## Limits by Plan

| Plan | Requests/Minute | Burst Capacity |
|---|---|---|
| Starter | 60 | 10 |
| Pro | 300 | 50 |
| Enterprise | Custom | Custom |
Burst capacity allows brief spikes above your normal limit. Sustained traffic above your limit will be rate-limited.
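The interplay between the per-minute rate and burst capacity behaves like a token bucket: the bucket holds at most your burst capacity in tokens and refills at your per-minute rate. A client-side throttle sketched along those lines (illustrative only, using the Starter plan's numbers; this is not how FloImg's servers are implemented):

```ts
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private ratePerMinute: number, // e.g. 60 on Starter
    private burstCapacity: number  // e.g. 10 on Starter
  ) {
    this.tokens = burstCapacity;
    this.lastRefill = Date.now();
  }

  tryAcquire(): boolean {
    const now = Date.now();
    // Refill proportionally to elapsed time, capped at burst capacity
    const refill = ((now - this.lastRefill) / 60_000) * this.ratePerMinute;
    this.tokens = Math.min(this.burstCapacity, this.tokens + refill);
    this.lastRefill = now;

    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false; // Caller should wait before retrying
  }
}
```

Gating outgoing requests through a limiter like this keeps short spikes within burst capacity and sustained traffic at or below your plan's rate.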
## Rate Limit Headers
Every API response includes rate limit headers:

```
HTTP/1.1 200 OK
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 45
X-RateLimit-Reset: 1705312800
```

| Header | Description |
|---|---|
| `X-RateLimit-Limit` | Your plan’s requests per minute |
| `X-RateLimit-Remaining` | Requests remaining in current window |
| `X-RateLimit-Reset` | Unix timestamp when the window resets |
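As an example of putting these headers to work, `X-RateLimit-Reset` can be converted into a wait duration (a small sketch; it assumes the header is seconds since the Unix epoch, as shown above):

```ts
function msUntilReset(response: Response): number {
  const reset = response.headers.get('X-RateLimit-Reset');
  if (!reset) return 0;

  // Header is a Unix timestamp in seconds; Date.now() is milliseconds
  const waitMs = parseInt(reset, 10) * 1000 - Date.now();
  return Math.max(0, waitMs); // Never negative: the window may already be over
}
```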
## Handling Rate Limits
When you exceed your rate limit, the API returns `429 Too Many Requests`:

```json
{
  "error": {
    "code": "rate_limited",
    "message": "Rate limit exceeded. Retry after 15 seconds.",
    "retryAfter": 15
  }
}
```

The `Retry-After` header tells you how long to wait:
```
HTTP/1.1 429 Too Many Requests
Retry-After: 15
```

## Best Practices

### 1. Implement Exponential Backoff
When rate-limited, wait before retrying and increase the delay on subsequent failures:
```ts
async function executeWithRetry(
  fn: () => Promise<Response>,
  maxRetries = 3
) {
  let delay = 1000; // Start with 1 second

  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await fn();

    if (response.status === 429) {
      const retryAfter = response.headers.get('Retry-After');
      const waitTime = retryAfter ? parseInt(retryAfter, 10) * 1000 : delay;

      await new Promise(r => setTimeout(r, waitTime));
      delay *= 2; // Double the delay
      continue;
    }

    return response;
  }

  throw new Error('Max retries exceeded');
}
```

### 2. Monitor Your Usage
Check remaining requests before making calls:

```ts
async function makeRequest(url: string) {
  const response = await fetch(url, {
    headers: { Authorization: `Bearer ${apiKey}` }
  });

  const remaining = response.headers.get('X-RateLimit-Remaining');

  if (parseInt(remaining || '0', 10) < 5) {
    console.warn('Rate limit nearly exhausted');
  }

  return response;
}
```

### 3. Use Async Execution for Batches
For bulk operations, use async workflow execution to avoid blocking:
```ts
// Start multiple executions
const executions = await Promise.all(
  items.map(item =>
    fetch('/v1/workflows/execute', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        workflowId: 'wf_abc',
        parameters: item,
        async: true // Don't wait for completion
      })
    })
  )
);

// Poll for results later
for (const exec of executions) {
  const { executionId } = await exec.json();
  // Check status periodically
}
```
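One way to fill in the "check status periodically" step is a small polling helper. This is a sketch only: the `/v1/executions/{id}` route, the `status` values, and the `output` field are assumptions, not confirmed FloImg API surface; check the workflow execution reference for the real shapes.

```ts
async function waitForResult(
  executionId: string,
  apiKey: string,
  maxPolls = 30
) {
  for (let i = 0; i < maxPolls; i++) {
    // Hypothetical status endpoint; confirm the real route in the API reference
    const res = await fetch(`/v1/executions/${executionId}`, {
      headers: { Authorization: `Bearer ${apiKey}` }
    });
    const { status, output } = await res.json();

    if (status === 'completed') return output;
    if (status === 'failed') throw new Error(`Execution ${executionId} failed`);

    await new Promise(r => setTimeout(r, 2000)); // Wait 2s between polls
  }
  throw new Error(`Execution ${executionId} did not finish in time`);
}
```

A fixed 2-second interval is a deliberate simplification; in production you may prefer the same exponential backoff used for retries.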
### 4. Cache Results
Cache workflow outputs when possible to reduce API calls:
```ts
const cache = new Map();

async function getOrExecute(workflowId: string, params: object) {
  const cacheKey = `${workflowId}:${JSON.stringify(params)}`;

  if (cache.has(cacheKey)) {
    return cache.get(cacheKey);
  }

  const result = await executeWorkflow(workflowId, params);
  cache.set(cacheKey, result);

  return result;
}
```
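A plain `Map` never expires entries, so stale results are served indefinitely and memory grows without bound in a long-running service. One simple variation adds time-based expiry (a sketch; the 60-second default for `ttlMs` is an arbitrary choice):

```ts
interface CacheEntry<T> {
  value: T;
  expiresAt: number;
}

const ttlCache = new Map<string, CacheEntry<unknown>>();

function getCached<T>(key: string): T | undefined {
  const entry = ttlCache.get(key);
  if (!entry) return undefined;

  if (Date.now() > entry.expiresAt) {
    ttlCache.delete(key); // Expired: evict and report a miss
    return undefined;
  }
  return entry.value as T;
}

function setCached<T>(key: string, value: T, ttlMs = 60_000) {
  ttlCache.set(key, { value, expiresAt: Date.now() + ttlMs });
}
```

Pick a TTL shorter than the interval at which your workflow outputs can change, so callers never see results older than they can tolerate.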
## Upgrading Your Limit
If you consistently hit rate limits:
- **Optimize your integration** - Batch requests, cache results, use async execution
- **Upgrade your plan** - Pro offers 5x the rate limit of Starter
- **Contact sales** - Enterprise plans offer custom limits for high-volume use cases