Majordomo is an LLM control plane that sits between your application and your AI providers. With the Managed setup, you create an account, point your SDK at the Majordomo gateway, and your requests start appearing in the dashboard immediately, with no infrastructure to operate.
This guide covers the Managed setup — Majordomo runs the gateway. For self-hosted Steward in your own VPC, see Self-hosted Setup.
1. Create an account and API key

Sign up at app.gomajordomo.com. From the dashboard, navigate to API Keys and create your first key. Your key has the format mdm_sk_.... Store it in your secrets manager; it is shown only once, at creation time.
2. Update your SDK configuration

Majordomo acts as a transparent proxy. Change the base URL and add one header. Your existing provider API key passes through unchanged.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://gateway.gomajordomo.com/v1",
    api_key=os.environ["OPENAI_API_KEY"],
    default_headers={"X-Majordomo-Key": os.environ["MAJORDOMO_API_KEY"]},
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
)
Set the environment variable in your deployment environment:
MAJORDOMO_API_KEY=mdm_sk_your_key_here
The gateway returns responses identically to calling the provider directly — streaming, function calling, and all provider-specific parameters are passed through unchanged.
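Because the gateway is a pass-through, streaming uses the OpenAI SDK exactly as it would against the provider directly. The sketch below factors the gateway settings from the example above into a small helper; the helper name is ours, and gpt-4o is just an example model:

```python
import os

def majordomo_config(provider_key: str, majordomo_key: str) -> dict:
    """Keyword arguments for an OpenAI client routed through the
    Majordomo gateway (URL and header per this guide)."""
    return {
        "base_url": "https://gateway.gomajordomo.com/v1",
        "api_key": provider_key,
        "default_headers": {"X-Majordomo-Key": majordomo_key},
    }

# Usage (assumes the openai package is installed and keys are set):
# from openai import OpenAI
# client = OpenAI(**majordomo_config(os.environ["OPENAI_API_KEY"],
#                                    os.environ["MAJORDOMO_API_KEY"]))
# stream = client.chat.completions.create(
#     model="gpt-4o",
#     messages=[{"role": "user", "content": "Hello"}],
#     stream=True,  # streamed chunks pass through the gateway unchanged
# )
# for chunk in stream:
#     print(chunk.choices[0].delta.content or "", end="")
```

Centralizing the gateway settings in one place also makes it easy to disable the proxy (for example, in local development) by swapping the config function.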
3. Verify in the dashboard

Open the Majordomo dashboard. Your request appears with model, token counts, cost, and latency as soon as it completes; no polling or additional setup is required. From here you can manage API keys, tag requests for cost attribution, run replays against candidate models, and build eval sets from production traffic.

Next steps

Attribute costs by team or feature

Tag requests with custom metadata and break down spend across any dimension.

Test a model switch

Replay production traffic against a candidate model before committing to the change.

Self-hosted Steward

Run Steward in your own VPC. Prompts never leave your infrastructure.

All SDKs and frameworks

Integration examples for every supported SDK, including Pydantic AI and majordomo-llm.