
How to Connect CrewAI Agents to a Control Plane (2026 Guide)

Connect your CrewAI crews to Dobby for unified monitoring, cost tracking, and governance. Step-by-step integration with a one-line SDK change.

8 min read · Gil Kal · Mar 28, 2026

What you will learn

  • Connect a CrewAI crew to Dobby via the Gateway
  • Track costs and token usage for each crew member
  • Set up approval gates for crew task execution
  • Monitor crew runs in the unified dashboard
  • Register a CrewAI crew as an external agent for scheduling

TL;DR — CrewAI is great at orchestration, but it does not track cost, enforce policies, or produce audit trails. Point its LLM base URL at the Dobby Gateway (one-line change) and every crew member lights up in the Agent Fleet with full cost attribution.

Why Connect CrewAI to a Control Plane?

CrewAI is excellent at orchestrating multi-agent workflows — defining roles, assigning tasks, and managing collaboration between agents. But it was not designed to manage costs, enforce policies, or provide enterprise-grade audit trails.

By routing CrewAI LLM calls through a control plane, you get cost tracking, policy enforcement, and observability without changing how your crews work.

Without Dobby

CrewAI runs autonomously with no cost visibility. A crew of 5 agents burns $200 overnight on GPT-4 calls. You find out when the invoice arrives.

With Dobby

Every LLM call is tracked per crew member. Budget alert fires at $50. The crew is automatically throttled before exceeding the limit.

Integration in 3 Steps

1. Create a Gateway API key at dobby-ai.com → Gateway → API Keys. Use a service key (gk_svc_*) for production crews — it supports 500 requests per minute.

2. Configure your CrewAI agents to use the Dobby Gateway as their LLM endpoint. This is a one-line change — point the base URL to the Gateway instead of OpenAI directly.

3. Run your crew. All LLM calls now flow through the Gateway — costs, tokens, and latency are tracked automatically per agent.

Code Example

```python
from crewai import Agent, Task, Crew
from langchain_openai import ChatOpenAI

# Point to Dobby Gateway instead of OpenAI
llm = ChatOpenAI(
    model="gpt-4o",
    base_url="https://dobby-ai.com/api/v1/gateway",
    api_key="gk_svc_your_service_key"  # Gateway key
)

# Define agents with the Gateway-routed LLM
researcher = Agent(
    role="Senior Researcher",
    goal="Find comprehensive information",
    llm=llm,
    verbose=True
)

writer = Agent(
    role="Content Writer",
    goal="Write clear, engaging content",
    llm=llm,
    verbose=True
)

# Create tasks and crew as usual
task = Task(
    description="Research AI agent governance trends",
    agent=researcher,
    expected_output="A summary report"
)

crew = Crew(agents=[researcher, writer], tasks=[task])
result = crew.kickoff()

# Every LLM call is now tracked in Dobby:
# - Cost per agent (researcher vs writer)
# - Total crew run cost
# - Token usage breakdown
# - Full audit trail
```

Dobby auto-discovers CrewAI agents on their first Gateway call. Each crew member appears in the Agent Fleet dashboard with its own cost tracking, health status, and activity timeline — no manual registration needed.

What You See in the Dashboard

  • Agent Fleet — each crew member listed with status, cost, and last activity
  • Cost by Agent — researcher vs writer cost comparison
  • Live Feed — real-time LLM requests from the crew as they happen
  • Audit Trail — full history of every decision and output

Adding Governance to Crews

Once connected, you can add governance layers without changing your CrewAI code. Set budget limits per crew, require approval before high-cost operations, restrict which models crew members can use, and set up Slack alerts for crew completion or errors.
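Dobby enforces these policies server-side at the Gateway, but the budget-gate idea is easy to sketch. The helper below is a purely illustrative, client-side approximation — the `BudgetGuard` class and its thresholds are made up for this example and are not part of the real SDK:

```python
class BudgetGuard:
    """Illustrative client-side sketch of a per-crew budget gate.

    Dobby applies this logic at the Gateway; nothing here is the
    real SDK -- it only demonstrates the alert/throttle behavior.
    """

    def __init__(self, limit_usd: float, alert_at_usd: float):
        self.limit_usd = limit_usd
        self.alert_at_usd = alert_at_usd
        self.spent_usd = 0.0
        self.alerted = False

    def record(self, cost_usd: float) -> str:
        """Record one LLM call's cost and return the resulting state."""
        self.spent_usd += cost_usd
        if self.spent_usd >= self.limit_usd:
            return "throttled"  # crew paused before exceeding the limit
        if self.spent_usd >= self.alert_at_usd and not self.alerted:
            self.alerted = True
            return "alert"  # e.g. a Slack notification fires here
        return "ok"


guard = BudgetGuard(limit_usd=200.0, alert_at_usd=50.0)
```

With a $200 limit and a $50 alert threshold, the first call that pushes spend past $50 triggers a single alert, and any call that pushes spend past $200 is throttled — the same shape as the "budget alert fires at $50" scenario above.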

Scheduling Crews as External Agents

For crews that need to run on a schedule or with approval gates before execution, register the crew as an external agent via the SDK or dashboard. Dobby will trigger it on cron, pass inputs, and log the trigger history.
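The exact registration call depends on your SDK version, so treat the payload builder below as a hypothetical illustration of what an external-agent record might contain — the endpoint path in the docstring and all field names are assumptions, not the documented API:

```python
import json


def external_agent_payload(name: str, cron: str, inputs: dict) -> str:
    """Build a JSON body for registering a crew as an external agent.

    Hypothetical shape for a POST to something like
    /api/v1/agents/external -- field names are illustrative only.
    """
    return json.dumps({
        "name": name,
        "type": "external",
        "trigger": {"kind": "cron", "schedule": cron},
        "inputs": inputs,
    })


body = external_agent_payload(
    name="research-crew",
    cron="0 6 * * 1",  # every Monday at 06:00
    inputs={"topic": "AI agent governance trends"},
)
```

Once registered, Dobby triggers the crew on the cron schedule, passes the inputs, and logs each trigger in the history.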

You can also build CrewAI crews directly in Dobby using the YAML Crew Builder — define agents, tasks, and tools in YAML and let Dobby handle the orchestration, monitoring, and governance automatically.
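The Crew Builder defines its own schema, so the snippet below is only a sketch of the general shape such a definition might take — every key name here is an assumption, not the documented format:

```yaml
# Illustrative only -- the real Crew Builder schema may differ.
crew:
  name: research-crew
  agents:
    - role: Senior Researcher
      goal: Find comprehensive information
      model: gpt-4o
    - role: Content Writer
      goal: Write clear, engaging content
      model: gpt-4o
  tasks:
    - description: Research AI agent governance trends
      agent: Senior Researcher
      expected_output: A summary report
```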

Frequently Asked Questions

Does this work with CrewAI's embedded LLM support?

Yes. Any LLM client CrewAI supports (ChatOpenAI, ChatAnthropic via compatible shims, etc.) can be pointed at the Gateway base URL. The Gateway speaks the OpenAI API, so CrewAI does not need to know it is talking to Dobby.

Can I track cost per task, not just per agent?

Yes — pass a task_id or tag in the request metadata and the FinOps dashboard will group cost by that dimension. This is how most teams build per-workflow ROI reports.
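A minimal sketch of tagging, assuming the metadata key names (`crew_id`, `task_id`) match what the FinOps dashboard groups on — the helper name and the exact schema are illustrative, not documented:

```python
def cost_tags(crew_id: str, task_id: str) -> dict:
    """Build extra request fields the Gateway can group costs by.

    The key names ("crew_id", "task_id") are assumptions about what
    the FinOps dashboard accepts, not a documented schema.
    """
    return {"metadata": {"crew_id": crew_id, "task_id": task_id}}


# Pass the result to your LLM client -- e.g., assuming langchain-openai's
# ChatOpenAI forwards an extra_body parameter to the endpoint:
#   llm = ChatOpenAI(
#       model="gpt-4o",
#       base_url="https://dobby-ai.com/api/v1/gateway",
#       api_key="gk_svc_your_service_key",
#       extra_body=cost_tags("research-crew", "trend-research"),
#   )
tags = cost_tags("research-crew", "trend-research")
```

Creating one tagged LLM instance per task (rather than sharing one across the crew) is what makes per-task cost grouping possible.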

What about CrewAI Tools / RAG calls?

Route RAG embedding calls through the Gateway too (they use the same OpenAI-compatible embeddings endpoint). Custom tools can be registered as MCP tools or BYOA agents so they appear in the same fleet view.
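Since the Gateway speaks the OpenAI API, an embeddings call is just a POST with the standard body. The builder below shows that shape with the stdlib only — the model name is an example, and higher-level clients (e.g. langchain-openai's `OpenAIEmbeddings`, which accepts a `base_url`) can be pointed at the Gateway the same way:

```python
import json


def embeddings_request(texts: list[str],
                       model: str = "text-embedding-3-small") -> str:
    """Build an OpenAI-compatible /embeddings request body.

    Sent to the Gateway's embeddings endpoint, these calls are
    cost-tracked exactly like chat completions.
    """
    return json.dumps({"model": model, "input": texts})


body = embeddings_request(["AI agent governance trends"])
```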

Ready to try this yourself?

Start free — no credit card required.
