Route every LLM call through a single OpenAI-compatible proxy. 13+ providers, built-in security pipeline (Content Shield + DLP + kill-switch), real-time cost tracking. One line of code to set up.
Claude, GPT, Gemini, Mistral, Llama, Bedrock, DeepSeek, Grok, Perplexity, and more — all through one endpoint.
Every request passes through Content Shield (prompt-injection detection), DLP (PII detection), and policy enforcement.
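The DLP stage can be pictured as a scan-and-redact pass over each prompt before it leaves the gateway. A minimal sketch in Python, assuming simple regex detectors — the patterns and placeholder format here are illustrative, not Dobby's actual rule set:

```python
import re

# Illustrative detectors only; a real DLP rule set is far more extensive.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> tuple[str, list[str]]:
    """Replace detected PII with typed placeholders; return what was found."""
    findings = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[{label}]", text)
    return text, findings

clean, found = redact_pii("Contact jane@example.com, SSN 123-45-6789.")
# clean == "Contact [EMAIL], SSN [SSN].", found == ["EMAIL", "SSN"]
```

A pipeline like this can either block the request outright or forward the redacted text, depending on policy.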
Per-agent, per-provider, per-user metering. Budget enforcement with atomic Redis checks. Alerts at spending thresholds.
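An atomic budget check means the read-compare-increment happens in one Redis operation, so concurrent requests cannot race past the cap. A hedged sketch — the Lua script, key name, and units are assumptions, not Dobby's actual schema:

```python
# Illustrative Lua script: reserve `cost` against `budget` in one atomic step.
RESERVE_BUDGET_LUA = """
local spend = tonumber(redis.call('GET', KEYS[1]) or '0')
local cost = tonumber(ARGV[1])
local budget = tonumber(ARGV[2])
if spend + cost > budget then
    return 0  -- reject: would exceed budget
end
redis.call('INCRBYFLOAT', KEYS[1], cost)
return 1      -- accept: spend reserved
"""

def reserve_budget(spend: float, cost: float, budget: float) -> bool:
    """Pure-Python equivalent of the accept/reject decision, for illustration."""
    return spend + cost <= budget

# With redis-py, the script would run as e.g.:
#   ok = r.eval(RESERVE_BUDGET_LUA, 1, "spend:agent-42", cost, budget)
```

Because Redis executes Lua scripts atomically, two requests that would jointly exceed the budget cannot both be accepted.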
Stop all agent traffic instantly. 4 scopes, 5-second propagation. Built into the gateway pipeline.
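A kill-switch with 5-second propagation can be implemented as a flag store that each gateway node re-polls at most once per 5-second window. A minimal sketch — the scope keys and the flag-fetching callback are assumptions, not Dobby's internals:

```python
import time

class KillSwitch:
    """Cached kill-switch lookup; flags propagate within `ttl` seconds."""

    def __init__(self, fetch_flags, ttl: float = 5.0):
        self._fetch = fetch_flags   # e.g. reads active flags from Redis
        self._ttl = ttl
        self._flags: set[str] = set()
        self._last_refresh = float("-inf")  # force a fetch on first check

    def is_blocked(self, *scopes: str) -> bool:
        # Refresh the local cache at most once per ttl window.
        now = time.monotonic()
        if now - self._last_refresh >= self._ttl:
            self._flags = set(self._fetch())
            self._last_refresh = now
        return any(s in self._flags for s in scopes)

# Hypothetical scope keys, checked on every request:
flags = {"agent:crawler-7"}
ks = KillSwitch(lambda: flags)
ks.is_blocked("global", "agent:crawler-7", "provider:openai")  # True
```

Checking several scopes per request lets one flag stop a single agent, a provider, or everything at once.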
Point your OpenAI SDK base_url to Dobby. No other code changes needed.
Every call scanned for injections, PII, and policy violations.
Request routed to your configured provider with failover.
Cost metered per-agent, response logged, full audit trail.
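The flow above needs no SDK changes because the gateway speaks the OpenAI wire format. A sketch of the request shape using only the standard library — the gateway URL, API key, and model name are placeholders for your deployment's values:

```python
import json
import urllib.request

# Placeholders; substitute your Dobby deployment's endpoint and key.
DOBBY_BASE_URL = "http://localhost:8080/v1"
API_KEY = "sk-placeholder"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Standard OpenAI chat-completions payload, pointed at the gateway."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{DOBBY_BASE_URL}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )

req = build_chat_request("example-model", "Hello")
# urllib.request.urlopen(req) would send it through the gateway pipeline.
```

With the OpenAI SDK, the equivalent is passing `base_url=DOBBY_BASE_URL` when constructing the client; everything downstream of that line is unchanged.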