Dobby
One Gateway. Every Provider.

LLM Gateway

Route every LLM call through a single OpenAI-compatible proxy. 13+ providers, built-in security pipeline (Content Shield + DLP + kill-switch), real-time cost tracking. One line of code to set up.

13+ LLM Providers

Claude, GPT, Gemini, Mistral, Llama, Bedrock, DeepSeek, Grok, Perplexity, and more — all through one endpoint.
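
Because the endpoint is OpenAI-compatible, switching providers is just a different model name in the same request. A minimal standard-library sketch, where the gateway URL, key format, and model names are illustrative placeholders:

```python
import json
import urllib.request

GATEWAY_URL = "https://dobby.example.com/v1"  # hypothetical gateway address

def build_request(model: str) -> urllib.request.Request:
    """Build an OpenAI-style chat/completions request for any provider's model."""
    payload = {
        "model": model,  # e.g. a Claude, GPT, or Gemini model name
        "messages": [{"role": "user", "content": "Hello"}],
    }
    return urllib.request.Request(
        f"{GATEWAY_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": "Bearer dby_example_key",  # key format is an assumption
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Same endpoint, different providers:
claude_req = build_request("claude-sonnet-example")
gpt_req = build_request("gpt-4o")
```

Sending either request (e.g. with `urllib.request.urlopen`) hits the one gateway endpoint; the model field alone decides which provider serves it.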

Built-In Security Pipeline

Every request passes through Content Shield (prompt injection), DLP (PII detection), and policy enforcement.
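
The ordering matters: content checks run before any tokens reach a provider. A simplified in-process sketch of the stages, where the function names and detection rules are illustrative rather than the gateway's actual implementation:

```python
import re

def content_shield(prompt: str) -> None:
    """Block an obvious prompt-injection pattern (illustrative rule only)."""
    if re.search(r"ignore (all )?previous instructions", prompt, re.I):
        raise PermissionError("blocked by Content Shield")

def dlp_scan(prompt: str) -> str:
    """Redact PII before it leaves the boundary (SSN-style pattern as an example)."""
    return re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[REDACTED]", prompt)

def policy_check(prompt: str) -> None:
    """Placeholder for org policy enforcement (size cap as an example rule)."""
    if len(prompt) > 100_000:
        raise PermissionError("blocked by policy: prompt too large")

def guarded_call(prompt: str, llm=lambda p: f"echo: {p}") -> str:
    content_shield(prompt)     # 1. injection screening
    clean = dlp_scan(prompt)   # 2. PII redaction
    policy_check(clean)        # 3. policy enforcement
    return llm(clean)          # 4. only then call the provider
```

A blocked request never reaches step 4, so nothing sensitive is forwarded upstream.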

Real-Time Cost Tracking

Per-agent, per-provider, per-user metering. Budget enforcement with atomic Redis checks. Alerts at spending thresholds.
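
The key property is that the budget check and the spend increment happen together, so concurrent requests cannot jointly overshoot a limit. In Redis this is typically done in a single Lua script; the sketch below emulates the same check-and-increment atomically with a lock (class name, limits, and costs are illustrative):

```python
import threading

class BudgetMeter:
    """In-memory stand-in for an atomic Redis check-and-increment per agent."""

    def __init__(self, limit_usd: float):
        self.limit = limit_usd
        self.spent: dict[str, float] = {}   # per-agent running total
        self._lock = threading.Lock()

    def charge(self, agent: str, cost_usd: float) -> bool:
        # Atomically: reject if the charge would exceed the budget, else record it.
        with self._lock:
            current = self.spent.get(agent, 0.0)
            if current + cost_usd > self.limit:
                return False
            self.spent[agent] = current + cost_usd
            return True

meter = BudgetMeter(limit_usd=1.00)
allowed = meter.charge("agent-a", 0.75)   # within budget
blocked = meter.charge("agent-a", 0.50)   # would exceed $1.00
```

Because rejection happens before the increment, a denied request leaves the meter untouched and a threshold alert can fire on the recorded total.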

Kill-Switch Ready

Stop all agent traffic instantly. 4 scopes, 5-second propagation. Built into the gateway pipeline.
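
A sketch of what the in-pipeline check could look like. The source does not name the four scopes, so the names used here (global, org, agent, key) are an assumption, and real state would propagate across gateway instances (the 5-second figure) rather than live in one process:

```python
# Active kill-switches as (scope, target) pairs; in-memory for illustration only.
KILL_SWITCHES: set[tuple[str, str]] = set()

def trip(scope: str, target: str) -> None:
    """Activate a kill-switch at the given scope."""
    KILL_SWITCHES.add((scope, target))

def is_killed(org: str, agent: str, key: str) -> bool:
    """Check every scope before a request proceeds down the pipeline."""
    checks = [("global", "*"), ("org", org), ("agent", agent), ("key", key)]
    return any(c in KILL_SWITCHES for c in checks)

trip("agent", "agent-a")   # stop one misbehaving agent, leave the rest running
```

Tripping the global scope short-circuits every request regardless of org, agent, or key.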

Key Capabilities

OpenAI-compatible chat/completions endpoint with streaming
Security pipeline: Content Shield → DLP → policy → LLM call
3-tier key system (user, service, temporary) with scoped permissions
Rate limiting per-key and per-org with Redis sliding window
Circuit breaker with automatic failover between providers
Full audit trail in BigQuery with 365-day retention

How It Works

1. Add one line

Point your OpenAI SDK base_url to Dobby. No other code changes needed.

2. Protected automatically

Every call scanned for injections, PII, and policy violations.

3. Route & execute

Request routed to your configured provider with failover.
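
Failover can be sketched as trying providers in order and skipping any whose circuit breaker has opened after repeated failures; the threshold, provider names, and classes here are illustrative:

```python
class Provider:
    def __init__(self, name: str, healthy: bool = True):
        self.name = name
        self.healthy = healthy
        self.failures = 0

    def call(self, prompt: str) -> str:
        if not self.healthy:
            self.failures += 1
            raise ConnectionError(f"{self.name} unavailable")
        return f"{self.name}: response"

def route_with_failover(providers, prompt: str, breaker_threshold: int = 3) -> str:
    for p in providers:
        if p.failures >= breaker_threshold:
            continue  # circuit open: skip without even trying
        try:
            return p.call(prompt)
        except ConnectionError:
            continue  # provider down: fall through to the next one
    raise RuntimeError("all providers failed")

primary = Provider("provider-a", healthy=False)
fallback = Provider("provider-b")
answer = route_with_failover([primary, fallback], "hi")
```

Once a provider trips the threshold, later requests bypass it entirely instead of paying the timeout each time.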

4. Track & audit

Cost metered per-agent, response logged, full audit trail.

Ready to get started?

Connect and manage AI agents with the LLM Gateway built in.