# Ollama
The Ollama plugin enables local AI capabilities using Ollama, with no cloud API required.
> **Powered by Ollama**
>
> This plugin integrates with Ollama running locally: no cloud API, and no data leaves your machine. See the Ollama documentation for available models.
## Installation

```sh
pnpm add @teamflojo/floimg-ollama
```
## Prerequisites

Install and run Ollama locally:

```sh
# Install Ollama (macOS)
brew install ollama

# Pull required models
ollama pull llava      # For vision
ollama pull llama3.2   # For text

# Start the Ollama server
ollama serve
```
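Before registering the plugin, it can help to fail fast if the server is down or a model has not been pulled. The sketch below uses Ollama's public `/api/tags` endpoint, which lists locally pulled models; the `hasModel` and `assertModelAvailable` helpers are illustrative, not part of floimg:

```typescript
// Hypothetical helper: check whether a wanted model appears in the list
// returned by Ollama's /api/tags endpoint. Names from /api/tags carry a
// tag suffix (e.g. "llava:latest"), so a bare "llava" should match too.
export function hasModel(installed: string[], wanted: string): boolean {
  return installed.some(
    (name) => name === wanted || name.split(':')[0] === wanted
  );
}

// Sketch: throw early if Ollama is unreachable or the model is missing.
export async function assertModelAvailable(
  wanted: string,
  baseUrl: string = 'http://localhost:11434'
): Promise<void> {
  const res = await fetch(`${baseUrl}/api/tags`); // lists pulled models
  if (!res.ok) throw new Error(`Ollama not reachable at ${baseUrl}`);
  const { models } = (await res.json()) as { models: { name: string }[] };
  if (!hasModel(models.map((m) => m.name), wanted)) {
    throw new Error(`Model "${wanted}" not pulled; run: ollama pull ${wanted}`);
  }
}
```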
## Registration

```ts
import createClient from '@teamflojo/floimg';
import ollama from '@teamflojo/floimg-ollama';

const floimg = createClient();

floimg.registerGenerator(ollama({
  baseUrl: 'http://localhost:11434' // Default Ollama URL
}));
```
## Vision Analysis (LLaVA)

Analyze images locally using LLaVA.

```ts
const analysis = await floimg.analyzeImage({
  blob: image,
  prompt: 'Describe this image in detail'
});

console.log(analysis.text);
// "The image shows a cozy living room with..."
```
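A vision call maps naturally onto Ollama's REST API: the request shape below follows Ollama's public `/api/generate` endpoint, where vision models accept base64-encoded images alongside the prompt. Whether the plugin builds exactly this payload is an assumption, and `buildVisionRequest` is an illustrative helper, not part of floimg:

```typescript
// Request body for Ollama's POST /api/generate endpoint (public Ollama
// REST API), as used by vision models such as LLaVA.
interface VisionRequest {
  model: string;
  prompt: string;
  images: string[]; // base64-encoded image bytes
  stream: false;    // ask for a single JSON response, not a stream
}

// Hypothetical helper: assemble a vision request, falling back to the
// documented default model ("llava") when none is given.
export function buildVisionRequest(
  imageBase64: string,
  prompt: string,
  model: string = 'llava'
): VisionRequest {
  return { model, prompt, images: [imageBase64], stream: false };
}

// Usage (assumes a running Ollama server):
// const res = await fetch('http://localhost:11434/api/generate', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: JSON.stringify(buildVisionRequest(b64, 'Describe this image')),
// });
```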
## Vision Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| `model` | string | No | Vision model (default: `llava`) |
| `prompt` | string | Yes | Question or instruction about the image |
## Text Generation (Llama)

Generate text locally using Llama.

```ts
const result = await floimg.generateText({
  prompt: 'Write a caption for a beach sunset photo',
  model: 'llama3.2'
});

console.log(result.text);
// "Golden hour magic at the beach..."
```
## Text Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| `model` | string | No | Text model (default: `llama3.2`) |
| `prompt` | string | Yes | The prompt for text generation |
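The defaults in the parameter tables amount to simple nullish fallbacks. A minimal sketch of that behavior (`withTextDefaults` and its shape are illustrative, not part of the plugin's API):

```typescript
interface TextParams {
  prompt: string; // required
  model?: string; // optional; falls back to the documented default
}

// Hypothetical helper: fill in the documented default text model
// ("llama3.2") when the caller does not specify one.
export function withTextDefaults(params: TextParams): Required<TextParams> {
  return { prompt: params.prompt, model: params.model ?? 'llama3.2' };
}
```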
## Available Models

| Model | Type | Size | Use Case |
|---|---|---|---|
| `llava` | Vision | 7B | Image analysis, description |
| `llava:13b` | Vision | 13B | Higher-quality analysis |
| `llama3.2` | Text | 3B | Fast text generation |
| `llama3.2:7b` | Text | 7B | Better-quality text |
Pull additional models with:

```sh
ollama pull llava:13b
ollama pull llama3.2:7b
```
## Example Pipeline

```ts
import createClient from '@teamflojo/floimg';
import ollama from '@teamflojo/floimg-ollama';
import qr from '@teamflojo/floimg-qr';

const floimg = createClient();
floimg.registerGenerator(ollama());
floimg.registerGenerator(qr());

// 1. Analyze an image locally
const analysis = await floimg.analyzeImage({
  blob: productImage,
  prompt: 'Describe this product for an e-commerce listing'
});

// 2. Generate a caption
const caption = await floimg.generateText({
  prompt: `Write a short marketing tagline based on: ${analysis.text}`
});

// 3. Create a QR code linking to the product
const qrCode = await floimg.generate({
  generator: 'qr',
  params: { data: 'https://example.com/product/123' }
});

console.log(caption.text);
```
## Benefits

- **Privacy**: all processing happens locally
- **No API costs**: no per-request charges
- **Offline**: works without an internet connection
- **Low latency**: no network round-trips; inference speed depends on your hardware