docs(rfd): Custom LLM endpoints #648

Open
xtmq wants to merge 12 commits into agentclientprotocol:main from xtmq:evgeniy.stepanov/rfd-custom-url

Conversation


@xtmq xtmq commented Mar 4, 2026


title: "Configurable LLM Providers"

Elevator pitch

What are you proposing to change?

Add the ability for clients to discover and configure agent LLM providers (identified by id) via dedicated provider methods:

  • providers/list
  • providers/set
  • providers/remove

This allows clients to route LLM requests through their own infrastructure (proxies, gateways, or self-hosted models) without agents needing to know about this configuration in advance.

Status quo

How do things work today and what problems does this cause? Why would we change things?

ACP does not currently define a standard method for configuring LLM providers.

In practice, provider configuration is usually done via environment variables or agent-specific config files. That creates several problems:

  • No standard way for clients to discover what providers an agent exposes
  • No standard way to update one specific provider by id
  • No standard way to remove provider configuration at runtime
  • Secret-bearing values in headers are difficult to handle safely when configuration must be round-tripped

This particularly affects:

  • Client proxies: clients want to route agent traffic through their own proxies, for example to add headers or logging
  • Enterprise deployments: organizations want to route LLM traffic through internal gateways for compliance, logging, and cost controls
  • Self-hosted models: users running local servers (vLLM, Ollama, etc.) need to redirect agent traffic to local infrastructure
  • API gateways: organizations using multi-provider routing, rate limiting, and caching need standardized endpoint configuration

Shiny future

How will things play out once this feature exists?

Clients will be able to:

  1. Understand whether an agent supports client-managed LLM routing
  2. See where the agent is currently sending LLM requests (for example in settings UI)
  3. Route agent LLM traffic through their own infrastructure (enterprise proxy, gateway, self-hosted stack)
  4. Update routing settings from the client instead of relying on agent-specific env vars
  5. Remove custom routing when needed and return to agent defaults
  6. Apply these settings before starting new work in sessions

Implementation details and plan

Tell me more about your implementation. What is your detailed implementation plan?

Intended flow

sequenceDiagram
    participant Client
    participant Agent

    Client->>Agent: initialize
    Agent-->>Client: initialize response (agentCapabilities.providers = true)

    Client->>Agent: providers/list
    Agent-->>Client: providers/list response

    Client->>Agent: providers/set (id = "main")
    Agent-->>Client: providers/set response

    Client->>Agent: providers/remove (optional)
    Agent-->>Client: providers/remove response

    Client->>Agent: session/new
  1. Client initializes and checks agentCapabilities.providers.
  2. Client calls providers/list to discover available providers, their current routing targets, and supported protocol types.
  3. Client calls providers/set to apply a new, complete configuration (all fields are required) for a specific provider id.
  4. Client may call providers/remove when a provider configuration should be removed.
  5. Client creates or loads sessions.
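The steps above can be sketched from the client side. This is a minimal sketch, assuming the transport exposes a generic JSON-RPC `request` helper; `Connection`, `FakeAgent`, and `configureProviders` are illustrative names, not part of the proposal, and `FakeAgent` is an in-process stand-in for a real ACP peer:

```typescript
type LlmProtocol = "anthropic" | "openai" | "azure" | "vertex" | "bedrock";

interface ProviderInfo {
  id: string;
  apiType: LlmProtocol;
  baseUrl: string;
  supported: LlmProtocol[];
}

// Hypothetical JSON-RPC transport abstraction.
interface Connection {
  request(method: string, params: unknown): Promise<any>;
}

// Illustrative in-process agent implementing the proposed methods.
class FakeAgent implements Connection {
  private providers = new Map<string, ProviderInfo>([
    ["main", {
      id: "main",
      apiType: "anthropic",
      baseUrl: "http://localhost/anthropic",
      supported: ["anthropic", "bedrock", "vertex", "azure"],
    }],
  ]);

  async request(method: string, params: any): Promise<any> {
    switch (method) {
      case "initialize":
        return { agentCapabilities: { providers: true } };
      case "providers/list":
        return { providers: [...this.providers.values()] };
      case "providers/set": {
        const existing = this.providers.get(params.id);
        if (!existing || !existing.supported.includes(params.apiType)) {
          throw new Error("invalid_params"); // unknown id or unsupported apiType
        }
        this.providers.set(params.id, {
          ...existing,
          apiType: params.apiType,
          baseUrl: params.baseUrl,
        });
        return {};
      }
      case "providers/remove":
        this.providers.delete(params.id); // removing an unknown id still succeeds
        return {};
      default:
        throw new Error(`unknown method ${method}`);
    }
  }
}

// Steps 1-4 of the intended flow, run before creating sessions.
async function configureProviders(agent: Connection): Promise<ProviderInfo[]> {
  const init = await agent.request("initialize", {});
  if (!init.agentCapabilities?.providers) {
    return []; // capability absent or false: provider methods are unsupported
  }
  await agent.request("providers/list", {}); // discover current routing
  await agent.request("providers/set", {
    id: "main",
    apiType: "anthropic",
    baseUrl: "https://llm-gateway.corp.example.com/anthropic/v1",
    headers: { "X-Request-Source": "my-ide" },
  });
  return (await agent.request("providers/list", {})).providers;
}
```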

Capability advertisement

The agent advertises support with a single boolean capability:

interface AgentCapabilities {
  // ... existing fields ...

  /**
   * Provider configuration support.
   * If true, the agent supports providers/list, providers/set, and providers/remove.
   */
  providers?: boolean;
}

If providers is absent or false, clients MUST treat provider methods as unsupported.

Types

/** Well-known API protocol identifiers. */
type LlmProtocol = "anthropic" | "openai" | "azure" | "vertex" | "bedrock";

interface ProviderInfo {
  /** Provider identifier, for example "main" or "openai". */
  id: string;

  /** Protocol currently used by this provider. */
  apiType: LlmProtocol;

  /** Base URL currently used by this provider. */
  baseUrl: string;

  /** Supported protocol types for this provider. */
  supported: LlmProtocol[];

  /** Extension metadata */
  _meta?: Record<string, unknown>;
}

providers/list

interface ProvidersListRequest {
  /** Extension metadata */
  _meta?: Record<string, unknown>;
}

interface ProvidersListResponse {
  /** Configurable providers with current routing info suitable for UI display. */
  providers: ProviderInfo[];

  /** Extension metadata */
  _meta?: Record<string, unknown>;
}

providers/set

providers/set updates the full configuration for one provider id.

interface ProvidersSetRequest {
  /** Provider id to configure. */
  id: string;

  /** Protocol type for this provider. */
  apiType: LlmProtocol;

  /** Base URL for requests sent through this provider. */
  baseUrl: string;

  /**
   * Full headers map for this provider.
   * May include authorization, routing, or other integration-specific headers.
   */
  headers: Record<string, string>;

  /** Extension metadata */
  _meta?: Record<string, unknown>;
}

interface ProvidersSetResponse {
  /** Extension metadata */
  _meta?: Record<string, unknown>;
}

providers/remove

interface ProvidersRemoveRequest {
  /** Provider id to remove. */
  id: string;

  /** Extension metadata */
  _meta?: Record<string, unknown>;
}

interface ProvidersRemoveResponse {
  /** Extension metadata */
  _meta?: Record<string, unknown>;
}

Example exchange

initialize Response:

{
  "jsonrpc": "2.0",
  "id": 0,
  "result": {
    "protocolVersion": 1,
    "agentInfo": {
      "name": "MyAgent",
      "version": "2.0.0"
    },
    "agentCapabilities": {
      "providers": true,
      "sessionCapabilities": {}
    }
  }
}

providers/list Request:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "providers/list",
  "params": {}
}

providers/list Response:

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "providers": [
      {
        "id": "main",
        "apiType": "anthropic",
        "baseUrl": "http://localhost/anthropic",
        "supported": ["bedrock", "vertex", "azure", "anthropic"]
      },
      {
        "id": "openai",
        "apiType": "openai",
        "baseUrl": "http://localhost/openai",
        "supported": ["openai"]
      }
    ]
  }
}

providers/set Request:

{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "providers/set",
  "params": {
    "id": "main",
    "apiType": "anthropic",
    "baseUrl": "https://llm-gateway.corp.example.com/anthropic/v1",
    "headers": {
      "X-Request-Source": "my-ide"
    }
  }
}

providers/set Response:

{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {}
}

providers/remove Request:

{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "providers/remove",
  "params": {
    "id": "openai"
  }
}

providers/remove Response:

{
  "jsonrpc": "2.0",
  "id": 3,
  "result": {}
}

Behavior

  1. Capability discovery: agents that support provider methods MUST advertise agentCapabilities.providers: true in initialize. Clients SHOULD only call providers/* when this capability is present and true.
  2. Timing and session impact: provider methods MUST be called after initialize. Clients SHOULD configure providers before creating or loading sessions. Agents MAY choose not to apply changes to already running sessions, but SHOULD apply them to sessions created or loaded after the change.
  3. List semantics: providers/list returns configurable providers, their current routing (apiType, baseUrl), and supported protocol types.
  4. Set semantics and validation: providers/set replaces the full configuration for the target id (apiType, baseUrl, full headers). If id is unknown, apiType is unsupported for that provider, or params are malformed, agents SHOULD return invalid_params.
  5. Remove semantics: providers/remove removes runtime configuration for the target id. Removing an unknown id SHOULD be treated as success (idempotent behavior).
  6. Scope and persistence: provider configuration is process-scoped and SHOULD NOT be persisted to disk.
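For rule 4, a rejected providers/set call would surface as a standard JSON-RPC invalid params error (code -32602 per the JSON-RPC 2.0 spec; the message text here is illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "error": {
    "code": -32602,
    "message": "Invalid params: apiType \"bedrock\" is not supported by provider \"openai\""
  }
}
```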

Frequently asked questions

What questions have arisen over the course of authoring this document?

Why not a single providers/update method for full list replacement?

A full-list update means the client must send complete configuration (including headers) for all providers every time.

If the client wants to change only one provider, it may not know headers for the others. In that case it cannot safely build a correct full-list payload.

Also, providers/list does not return headers, so the client cannot simply "take what the agent returned" and send it back with one edit.

Per-provider methods (set and remove) avoid this problem and keep updates explicit.

Why doesn't providers/list return headers?

Header values may contain secrets and should not be echoed by the agent. providers/list is intentionally limited to non-secret routing information (apiType, baseUrl).
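One practical consequence: since headers are never echoed back, a client that later wants to change a single header must keep its own record of what it last sent. A minimal sketch of such bookkeeping (the `ProviderHeaderCache` helper is hypothetical, not part of the proposal):

```typescript
type HeaderMap = Record<string, string>;

// Client-side memory of the headers last sent per provider id,
// since providers/list intentionally omits them.
class ProviderHeaderCache {
  private headers = new Map<string, HeaderMap>();

  // Remember the full headers map sent in the last providers/set call.
  record(id: string, sent: HeaderMap): void {
    this.headers.set(id, { ...sent });
  }

  // Build the next providers/set headers payload: start from what we
  // last sent and overlay only the changed entries.
  merge(id: string, changes: HeaderMap): HeaderMap {
    return { ...(this.headers.get(id) ?? {}), ...changes };
  }

  // Mirror providers/remove by dropping the cached entry.
  forget(id: string): void {
    this.headers.delete(id);
  }
}
```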

Why is this separate from initialize params?

Clients need capability discovery first, then provider discovery, then configuration. A dedicated method family keeps initialization focused on negotiation and leaves provider mutation to explicit steps.

Revision history

  • 2026-03-21: Initial draft of provider configuration API (providers/list, providers/set, providers/remove)
  • 2026-03-07: Rename "provider" to "protocol" to reflect API compatibility level; make LlmProtocol an open string type with well-known values; resolve open questions on identifier standardization and model availability
  • 2026-03-04: Revised to use dedicated setLlmEndpoints method with capability advertisement
  • 2026-02-02: Initial draft - preliminary proposal to start discussion

@xtmq xtmq requested a review from a team as a code owner March 4, 2026 20:55

@benbrandt benbrandt left a comment


Overall I really like this! Some questions but no major objections

@cdxiaodong

nice sir!!!

@IceyLiu

IceyLiu commented Mar 6, 2026

good PR, we do need it
