If you’re reading this, you’re probably in one of two situations: a prospect just sent you a 40-question security questionnaire with “Where does your AI data go?” as question one, or your security team flagged your current LLM observability vendor as a risk. Either way, you need answers you can stand behind. This page gives you the technical facts and the copy you need.

The core guarantee

When you run Steward in your own infrastructure:
  • Prompt content never leaves your environment. Steward runs inside your VPC, processes your LLM requests locally, and writes full request/response bodies to your own S3 or GCS bucket.
  • Majordomo’s servers never receive prompt content. The only data that flows outbound to Majordomo is request metadata: model name, token counts, cost, latency, and any custom tags you configure. No inputs, no outputs, no conversation history.
  • You own the storage. Bodies go to a bucket in your AWS account or GCP project. You control the encryption keys, the retention policy, and who has access.
This is not a policy commitment or a contractual clause. It is the technical architecture. There is no pathway for prompt content to reach Majordomo’s infrastructure, because Steward never sends it there.
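To make the boundary concrete, here is a minimal Python sketch of what a request routed through Steward looks like from the application side. The gateway address and auth placeholder are hypothetical (use your own in-VPC Steward endpoint); the `X-Majordomo-Feature` and `X-Majordomo-Team` tag headers are the custom metadata tags mentioned above, and they are the only caller-supplied fields that reach Majordomo.

```python
# Illustrative sketch: routing an OpenAI-style chat request through a
# self-hosted Steward gateway. The URL below is hypothetical; point it at
# the Steward instance running in your own VPC.
STEWARD_BASE_URL = "http://steward.internal:8080/v1"  # hypothetical in-VPC address

def build_request(prompt: str, feature: str, team: str) -> dict:
    """Assemble a request whose body stays in your VPC; only the
    X-Majordomo-* tag values travel to Majordomo as metadata."""
    return {
        "url": f"{STEWARD_BASE_URL}/chat/completions",
        "headers": {
            "Authorization": "Bearer <your-key>",  # placeholder
            # Custom tags: the only caller-supplied fields sent to Majordomo.
            "X-Majordomo-Feature": feature,
            "X-Majordomo-Team": team,
        },
        "json": {
            "model": "gpt-4o",
            # Prompt content: processed inside your VPC, written to your
            # bucket, never transmitted to Majordomo.
            "messages": [{"role": "user", "content": prompt}],
        },
    }

req = build_request("Summarize this contract...", feature="contract-review", team="legal")
```

Note that the prompt lives only in the request body bound for the provider API; nothing content-bearing appears in the tag headers.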

Data flow diagram

Your Application
      │
      ▼
Majordomo Steward ────── metadata only ──────► Majordomo Cloud
      │       │          (tokens, cost,        (no content)
      │       │           latency, tags)
      │       │
      │       └── full request/response body ──► Your S3 / GCS bucket
      │                                           (never leaves your account)
      ▼
LLM Provider API           ← OpenAI, Anthropic, Gemini, etc.
(existing path, unchanged)

Majordomo Steward runs in your VPC; the application-to-provider path is your existing one, unchanged.
What “metadata only” means in practice:

Field                                                        Sent to Majordomo?
Model name (e.g., gpt-4o)                                    Yes
Input token count                                            Yes
Output token count                                           Yes
Cost                                                         Yes
Latency (ms)                                                 Yes
Custom tags (X-Majordomo-Feature, X-Majordomo-Team, etc.)    Yes
Prompt text                                                  No
System prompt                                                No
Response text                                                No
Conversation history                                         No
User-identifiable content                                    No (unless you add it as a metadata tag)
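The split in the table above can be sketched as a simple field partition. The record shape below is hypothetical (Steward's internal representation is not specified here); the field lists mirror the table.

```python
# Illustrative sketch of the content/metadata split. The record shape is
# hypothetical; the field names mirror the "metadata only" table.
METADATA_FIELDS = {"model", "input_tokens", "output_tokens", "cost", "latency_ms", "tags"}

def split_record(record: dict) -> tuple[dict, dict]:
    """Return (body_for_your_bucket, metadata_for_majordomo)."""
    metadata = {k: v for k, v in record.items() if k in METADATA_FIELDS}
    body = {k: v for k, v in record.items() if k not in METADATA_FIELDS}
    return body, metadata

record = {
    "model": "gpt-4o",
    "input_tokens": 812,
    "output_tokens": 304,
    "cost": 0.011,
    "latency_ms": 1430,
    "tags": {"X-Majordomo-Feature": "contract-review"},
    "prompt": "Summarize this contract...",   # stays in your VPC
    "response": "The contract provides...",   # stays in your VPC
}
body, metadata = split_record(record)
```

Everything content-bearing lands in the `body` half, which is written only to storage you own.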

Security questionnaire answers

These answers apply to customers running self-hosted Steward in their own VPC. If you are using Managed (Majordomo-hosted) deployment, your requests are processed on Majordomo’s infrastructure — these answers do not apply to your setup. Contact security@gomajordomo.com for managed deployment security documentation.
The answers below are written from your perspective, for use in your own vendor questionnaires. Adapt them to match your specific infrastructure and policies before submitting.
Where is your AI/LLM data processed? In our own infrastructure. We run an open-source LLM gateway (Majordomo Steward) inside our VPC. All LLM requests are processed locally; the gateway proxies requests directly to provider APIs (OpenAI, Anthropic, etc.) without routing through any third-party intermediary.
Where is conversation data stored? In our own cloud storage. Full request and response bodies are written to an S3 bucket in our AWS account (or GCS bucket in our GCP project). We control the bucket, the encryption keys, and the retention policy.
Does your LLM observability vendor receive prompt data? No. We use Majordomo for usage metrics and cost tracking. Steward runs in our infrastructure and transmits only metadata to Majordomo’s servers: model names, token counts, costs, and latency. Prompt content and response content never leave our environment.
What data does your AI vendor have access to? Majordomo has access to request metadata only: which models we call, how many tokens we use, and what it costs. No prompt content, no response content, no user data.
Is PII processed by third-party AI infrastructure vendors? No PII is transmitted to Majordomo. Steward runs inside our own VPC and sends only non-content metadata outbound. Any PII that might appear in prompts is processed locally and written to storage we control. It never reaches Majordomo’s servers.
How is AI usage data encrypted in transit? Metadata transmitted to Majordomo is sent over TLS 1.2+. Request/response bodies stored in our S3/GCS bucket use AES-256 encryption at rest with our own CMK.
Can you provide a data flow diagram? Yes — see the diagram in the architecture section of our security documentation. The short version: user requests → our gateway (our VPC) → provider API. Bodies → our S3. Metadata only → Majordomo.
Does Majordomo have a SOC 2 report? Contact security@gomajordomo.com for current compliance documentation.

What to share with your security team

If your security team needs to review the architecture, point them here:
  1. This page — the technical architecture and data flow
  2. How It Works — the full technical explainer
  3. GitHub — Steward is open source; they can read the code
Steward is open source. There is no hidden telemetry, no phone-home behavior, and no dependency on Majordomo’s servers for the proxying function. If Majordomo’s cloud goes down, your Steward keeps proxying and logging locally.

Deployment

See Steward Setup for a complete walkthrough of deploying Steward in your VPC with Docker, Postgres, and optional S3/GCS body storage.

Body storage configuration

Body storage is configured in the dashboard (Settings → Cloud Body Storage), not in Steward config. Connect your S3 or GCS bucket once, and Steward will write gzipped request/response bodies to it automatically. Majordomo’s database contains only metadata — token counts, cost, latency, model name, and your custom tags. See Cloud Body Storage for setup instructions.
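Since bodies are stored gzipped, reading one back from your bucket is a decompress-and-parse step. The JSON shape and object layout below are assumptions for illustration (check the Cloud Body Storage docs for the actual layout); the sketch uses only the Python standard library.

```python
import gzip
import json

# Illustrative: Steward writes gzipped request/response bodies to your bucket.
# The JSON shape here is hypothetical; consult Cloud Body Storage for the
# real object layout and schema.
def write_body(body: dict) -> bytes:
    """Gzip-compress a body the way a stored object might look."""
    return gzip.compress(json.dumps(body).encode("utf-8"))

def read_body(blob: bytes) -> dict:
    """Decompress and parse a stored body fetched from your S3/GCS bucket."""
    return json.loads(gzip.decompress(blob).decode("utf-8"))

stored = write_body({"request": {"model": "gpt-4o"}, "response": {"text": "..."}})
roundtrip = read_body(stored)
```

The key point for reviews: decompression happens wherever you run this code, against objects in your own account.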

Checklist for enterprise reviews

Before a vendor security review, confirm:
  • Steward is deployed inside your VPC (not using Managed deployment)
  • Body storage is configured to your own S3/GCS bucket (or disabled if you don’t need it)
  • No X-Majordomo-User-Id or similar tags contain PII — use opaque identifiers
  • Network egress from Steward is restricted to: LLM provider endpoints, your S3/GCS bucket, Majordomo metadata ingest endpoint
  • Postgres is not publicly accessible
  • You have a documented retention policy for the llm_requests table and your body storage bucket
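For the PII check in the list above, one common pattern (a sketch, not the only option) is to tag requests with a keyed hash of the internal user ID, so any user tag that reaches Majordomo is an opaque token rather than an email address or name:

```python
import hashlib
import hmac

# Sketch: derive an opaque identifier for user-level tags. The secret stays
# in your infrastructure (e.g., a secrets manager); without it the tag value
# cannot be reversed, so no PII leaves your environment.
TAG_SECRET = b"rotate-me-and-keep-in-your-secrets-manager"  # placeholder

def opaque_user_id(internal_user_id: str) -> str:
    """HMAC-SHA256 of the internal ID, truncated for readability."""
    digest = hmac.new(TAG_SECRET, internal_user_id.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

tag = opaque_user_id("alice@example.com")
```

The same input always yields the same tag, so per-user cost attribution still works in the dashboard while the raw identifier never leaves your VPC.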