# Documentation
Fluq gives you full observability and control over your AI agent fleet. Three integration paths — pick the one that fits.
- **Proxy (zero code)** — One env var. All LLM calls observed.
- **SDK** — Python & TypeScript. Full control.
- **REST API** — Any language. HTTP POST events.
## Quickstart
Get from zero to observable agents in 60 seconds.
Sign up at fluq.ai/get-started with Google, GitHub, or email. Free tier: 50K events/month.
After signup you'll get an API key starting with `fo_`. Save it — the full key is only shown once.
First, register an agent:
```shell
curl -X POST https://fluq.ai/api/v1/agents \
  -H "Authorization: Bearer fo_YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"name": "my-agent", "type": "builder"}'
# Returns: { "data": { "id": "xxxxxxxx-xxxx-..." } }
```

Copy the `id` from the response and paste it as `agentId` below:
```shell
curl -X POST https://fluq.ai/api/v1/events/ingest \
  -H "Authorization: Bearer fo_YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '[{
    "agentId": "AGENT_UUID_FROM_ABOVE",
    "eventType": "action",
    "payload": { "description": "Hello from my agent" }
  }]'
```

Go to fluq.ai/dashboard — your event should appear in the live feed within seconds.
## Zero-Code Proxy
The fastest integration path. Set one environment variable and every LLM call flows through Fluq's proxy — automatic tracing, cost tracking, and policy enforcement with zero code changes.
### How it works
Fluq's proxy intercepts LLM calls by acting as a drop-in replacement for your provider's API base URL. Your API key and session ID are encoded in the URL path.
```shell
# Instead of:
export OPENAI_BASE_URL=https://api.openai.com/v1

# Use:
export OPENAI_BASE_URL=https://proxy.fluq.ai/_af/fo_YOUR_KEY/my-session/v1
```

That's it. Every call your agent makes now appears in your dashboard with full tracing, token counts, cost estimates, and tool call extraction.
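If you build the proxy URL programmatically (per-session names, rotating keys), it is plain string assembly; `fluq_proxy_url` below is our own helper, not part of any Fluq SDK:

```python
import os

def fluq_proxy_url(api_key: str, session: str) -> str:
    """Assemble the OpenAI-compatible base URL that routes through Fluq's proxy."""
    return f"https://proxy.fluq.ai/_af/{api_key}/{session}/v1"

# Any SDK that honors the standard env var now talks to the proxy:
os.environ["OPENAI_BASE_URL"] = fluq_proxy_url("fo_YOUR_KEY", "my-session")
```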
### Using the CLI wrapper
The `fluq-cli run` command wraps any process and sets the proxy URL automatically:
```shell
npx fluq-cli

# Wrap any agent command
fluq-cli run --key fo_YOUR_KEY -- python my_agent.py
fluq-cli run --key fo_YOUR_KEY -- node my_agent.js

# With a custom session name
fluq-cli run --key fo_YOUR_KEY --name "research-task-42" -- python agent.py
```

The CLI sets `OPENAI_BASE_URL` and `ANTHROPIC_BASE_URL` to the proxy, so any SDK using these env vars works immediately.
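The wrapper's job can be approximated in a few lines of Python: copy the environment, point both base-URL variables at the proxy, and run the command. This is a sketch of the idea, not the CLI's actual source:

```python
import os
import subprocess

def run_observed(cmd: list[str], key: str, session: str = "default") -> int:
    """Run `cmd` with its LLM traffic routed through Fluq's proxy."""
    env = dict(os.environ)  # inherit the current environment
    proxy = f"https://proxy.fluq.ai/_af/{key}/{session}/v1"
    env["OPENAI_BASE_URL"] = proxy
    env["ANTHROPIC_BASE_URL"] = proxy
    return subprocess.run(cmd, env=env).returncode
```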
### Supported providers
The proxy auto-detects the target provider from model names and headers:
- OpenAI — `gpt-4o`, `o1`, `o3`, etc.
- Anthropic — `claude-*` models (detected via the `anthropic-version` header)
- Google — `gemini-*` models
## SDK Integration
Full-control SDK for Python and TypeScript. Use when you want structured traces, custom events, and fine-grained cost tracking.
### Python
```shell
pip install fluq-sdk
```

```python
from fluq import Fluq, FluqConfig, TraceInput, EventInput, EventType

fluq = Fluq()
fluq.init(FluqConfig(
    api_key="fo_YOUR_KEY",
    agent_id="research-agent",
    capabilities=["llm", "web_search"],
    base_url="https://fluq.ai"
))

async def run_research():
    async with await fluq.trace(TraceInput(name="research-task")) as trace:
        result = await call_llm("Summarize recent AI papers")
        trace.event(EventInput(
            event_type=EventType.LLM_CALL,
            input={"prompt": "Summarize recent AI papers"},
            output={"result": result},
            tokens_in=150,
            tokens_out=800,
            estimated_cost_usd=0.003
        ))
    await fluq.flush()
```

### TypeScript
```shell
npm install fluq-sdk
```

```typescript
import { Fluq, EventType } from "fluq-sdk";

const fluq = new Fluq();
await fluq.init({
  apiKey: "fo_YOUR_KEY",
  agentId: "research-agent",
  capabilities: ["llm", "web_search"],
  baseUrl: "https://fluq.ai",
});

const trace = await fluq.trace({ name: "research-task" });
await using _ = trace; // auto-closes

const result = await callLLM("Summarize recent AI papers");
trace.event({
  eventType: EventType.LLM_CALL,
  input: { prompt: "Summarize recent AI papers" },
  output: { result },
  tokensIn: 150,
  tokensOut: 800,
  estimatedCostUsd: 0.003,
});

await fluq.flush();
```

### CLI — fluq-cli run
Auto-discovers running AI agents (tmux sessions, processes) and reports activity to your fleet dashboard.
```shell
npx fluq-cli

# Instrument your agent
fluq-cli run --key fo_YOUR_KEY -- <your-agent-command>

# Example: wrap a Claude Code session
fluq-cli run --key fo_YOUR_KEY -- claude --dangerously-skip-permissions "build feature"
```

## REST API Reference
All API endpoints are at `https://fluq.ai/api/v1/`. Authenticate with `Authorization: Bearer fo_YOUR_KEY`. All request/response bodies are JSON.
### Event Ingestion
The core of Fluq — send events from your agents.
| Method | Path | Description |
|---|---|---|
| POST | /api/v1/events/ingest | Ingest one or more events (array body) |
| POST | /api/v1/events/ingest/bulk | Bulk event ingestion (large batches) |
| GET | /api/v1/events | List events with filters |
| GET | /api/v1/events/stream | SSE stream of new events |
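Events can be sanity-checked client-side before posting to these endpoints. The checker below mirrors the body schema documented in this section; it is our own convenience, not the server's actual validation:

```python
# Allowed eventType values, per the schema on this page.
EVENT_TYPES = {
    "llm_call", "tool_use", "action", "decision", "error",
    "spawn", "file_write", "file_read", "api_call",
}

def validate_event(event: dict) -> list[str]:
    """Return a list of problems; an empty list means the event looks ingestible."""
    problems = []
    if not isinstance(event.get("agentId"), str):
        problems.append("agentId must be a UUID string")
    if event.get("eventType") not in EVENT_TYPES:
        problems.append(f"unknown eventType: {event.get('eventType')!r}")
    if "payload" in event and not isinstance(event["payload"], dict):
        problems.append("payload must be an object")
    return problems
```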
Event body schema:
```jsonc
[
  {
    "agentId": "xxxxxxxx-xxxx-...", // string (UUID) — from POST /api/v1/agents
    "eventType": "llm_call",        // llm_call | tool_use | action | decision | error | spawn | file_write | file_read | api_call
    "payload": {},                  // object — any structured data
    "metadata": {},                 // object — optional metadata (model, tags, etc.)
    "estimatedCostUsd": 0.003,      // number — optional cost in USD
    "durationMs": 1200,             // number — optional duration
    "tokensIn": 150,                // number — optional input tokens
    "tokensOut": 800                // number — optional output tokens
  }
]
```

### Agent Management
| Method | Path | Description |
|---|---|---|
| GET | /api/v1/agents | List all agents in your fleet |
| POST | /api/v1/agents | Register a new agent |
| GET | /api/v1/agents/:id | Get agent details |
| PUT | /api/v1/agents/:id | Update agent config |
| POST | /api/v1/agents/:id/heartbeat | Send agent heartbeat |
| GET | /api/v1/agents/:id/stats | Agent statistics |
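A liveness loop against the heartbeat endpoint can be as small as this. The path and auth header come from this page; the empty JSON body and 30-second cadence are assumptions:

```python
import time
import urllib.request

def heartbeat_request(api_key: str, agent_id: str) -> urllib.request.Request:
    """Build a POST to /api/v1/agents/:id/heartbeat (body assumed empty)."""
    return urllib.request.Request(
        f"https://fluq.ai/api/v1/agents/{agent_id}/heartbeat",
        data=b"{}",
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def heartbeat_forever(api_key: str, agent_id: str, every_s: int = 30) -> None:
    """Send a heartbeat on a fixed interval until the process exits."""
    while True:
        urllib.request.urlopen(heartbeat_request(api_key, agent_id))
        time.sleep(every_s)
```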
### Traces
Traces group related events into a single workflow. Create a trace when your agent starts a task, add events during execution, and close it when done.
| Method | Path | Description |
|---|---|---|
| POST | /api/v1/traces | Create a new trace |
| GET | /api/v1/traces | List traces (filterable) |
| GET | /api/v1/traces/:id | Get trace details + events |
| GET | /api/v1/traces/active | List active (in-progress) traces |
### Sessions
Sessions group traces and events by logical workflow. The proxy creates sessions automatically from the URL path; the SDK lets you manage them explicitly.
| Method | Path | Description |
|---|---|---|
| GET | /api/v1/sessions | List sessions |
| GET | /api/v1/sessions/:id | Session details with timeline |
### Policy Engine
Define rules that automatically enforce spend limits, rate limits, model restrictions, file access controls, and approval gates across your fleet.
| Method | Path | Description |
|---|---|---|
| GET | /api/v1/policies | List all policies |
| POST | /api/v1/policies | Create a policy |
| PUT | /api/v1/policies/:id | Update a policy |
| DELETE | /api/v1/policies/:id | Delete a policy |
Policy types:
- `spend_limit` — Per-agent or fleet-wide spend caps (USD per hour/day/month)
- `rate_limit` — Max requests per time window per agent
- `model_allowlist` — Restrict which models agents can use (`claude-*`, `gpt-4o`)
- `file_access` — Block or allow file read/write paths
- `approval_gate` — Require human approval before sensitive operations
### Task Queue
Distribute work across agents with priority-based routing and capability matching.
| Method | Path | Description |
|---|---|---|
| POST | /api/v1/tasks | Create a task |
| POST | /api/v1/tasks/pull | Pull next task (by agent capability) |
| POST | /api/v1/tasks/:id/complete | Mark task complete |
| POST | /api/v1/tasks/:id/fail | Mark task failed |
| GET | /api/v1/tasks/:id | Get task details |
### Authentication
All API calls require a fleet API key in the Authorization header:
```shell
curl https://fluq.ai/api/v1/agents \
  -H "Authorization: Bearer fo_YOUR_API_KEY"
```

API keys are scoped to a fleet. Each fleet is isolated — agents, events, and policies in one fleet cannot see or affect another.
Generate keys in the dashboard settings or during onboarding.
### Rate Limits
| Tier | Agents | Events/mo | API requests/min |
|---|---|---|---|
| Free | 10 | 50K | 60 |
| Pro ($29/mo) | 50 | 500K | 300 |
| Team ($99/mo) | Unlimited | Unlimited | 1,000 |
Rate limit headers are included in every response: `X-RateLimit-Remaining`, `X-RateLimit-Reset`.