# Design: Dynamic Multi-Tenant Plugin Management
## Problem
The plugin framework currently loads plugins from a static config.yaml file into a singleton PluginManager at startup. This creates two limitations:
- **No dynamic configuration.** Adding, removing, or updating plugins requires restarting the application. There's no way to modify plugin configuration at runtime.
- **No multi-tenancy.** All users share the same set of plugins with the same configuration. ContextForge supports the concept of teams, and different teams need different plugin configurations — different scanning thresholds, different compliance plugins, different custom integrations.
## Goals
- Support database-backed plugin configuration alongside the existing config.yaml
- Support multi-tenancy at global and team levels
- Allow dynamic plugin lifecycle (add/remove/update without restart)
- Keep the PluginManager itself simple — it doesn't need to know about tenants or databases
- Maintain backward compatibility for single-tenant, YAML-based deployments
## Design Overview
```
┌──────────────────────────────────────────────────────────┐
│ Config Sources (host application)                        │
│                                                          │
│ ┌──────────┐  ┌──────────────┐  ┌──────────────────┐     │
│ │ YAML     │  │ Database     │  │ API / other      │     │
│ │ (static) │  │ (dynamic)    │  │ (future)         │     │
│ └────┬─────┘  └──────┬───────┘  └────────┬─────────┘     │
│      │               │                   │               │
│      └───────────────┼───────────────────┘               │
│                      │                                   │
│                      ▼                                   │
│            Config (pydantic object)                      │
└──────────────────────┬───────────────────────────────────┘
                       │
                       ▼
┌──────────────────────────────────────────────────────────┐
│ Plugin Framework (cpex/framework/)                       │
│                                                          │
│ ┌────────────────────────────────────────────────┐       │
│ │ TenantPluginManager (cpex/framework/tenant.py) │       │
│ │                                                │       │
│ │ Resolves: global config + team config          │       │
│ │ Creates:  PluginManager per tenant             │       │
│ │ Caches:   managers by tenant ID                │       │
│ │ Reloads:  on config change (atomic swap)       │       │
│ │ Override: load_tenant_config() for custom      │       │
│ │           data sources                         │       │
│ │                                                │       │
│ │ ┌──────────────┐  ┌──────────────┐             │       │
│ │ │PluginManager │  │PluginManager │ ...         │       │
│ │ │ (global)     │  │ (team-alpha) │             │       │
│ │ └──────────────┘  └──────────────┘             │       │
│ └────────────────────────────────────────────────┘       │
│                                                          │
│ ┌────────────────────────────────────────────────┐       │
│ │ PluginManager (cpex/framework/manager.py)      │       │
│ │                                                │       │
│ │ Receives: Config object                        │       │
│ │ Manages:  registry, loader, executor           │       │
│ │ Knows:    nothing about tenants or databases   │       │
│ └────────────────────────────────────────────────┘       │
└──────────────────────────────────────────────────────────┘
```
## What Lives Where
| Component | Location | Responsibility |
|---|---|---|
| `PluginManager` | `cpex/framework/manager.py` | Manages a set of plugins from a `Config`. No tenant awareness. |
| `TenantPluginManager` | `cpex/framework/tenant.py` | Config merging, per-tenant manager lifecycle, atomic swap reload. Part of the framework. |
| `load_global_config()` | Override point on `TenantPluginManager` | Default returns empty config. Override to load global config from database, API, etc. |
| `load_tenant_config()` | Override point on `TenantPluginManager` | Default returns empty config. Override to load tenant config from database, API, etc. |
| Database schema, migrations, CRUD APIs | Host application (e.g., ContextForge) | Data source is not the framework's concern. |
| Change notification (polling, events) | Host application | Triggers `reload_tenant()` / `reload_global()` on the framework. |
The framework provides the orchestration (merging, lifecycle, routing). The host application provides the data source. The boundary is load_global_config() and load_tenant_config() — the framework calls them; the host application implements them.
## Detailed Design
### 1. Config as Input (not loaded internally)
**Current:** PluginManager loads config.yaml internally via `ConfigLoader`.

**Proposed:** PluginManager receives a `Config` pydantic object. It doesn't know or care where the config came from.
```python
class PluginManager:
    def __init__(
        self,
        config: Config,  # ← passed in, not loaded
        timeout: int = DEFAULT_PLUGIN_TIMEOUT,
        observability: Optional[ObservabilityProvider] = None,
        hook_policies: Optional[dict[str, HookPayloadPolicy]] = None,
        singleton: bool = False,  # ← opt-in singleton
    ) -> None: ...
```

The `config: str` (file path) parameter becomes a convenience that delegates to `ConfigLoader`, preserved for backward compatibility:
```python
@classmethod
def from_yaml(cls, path: str, **kwargs) -> "PluginManager":
    """Create a PluginManager from a YAML config file."""
    config = ConfigLoader.load_config(path)
    return cls(config=config, **kwargs)
```

**Why this way:** The framework shouldn't know about databases. The host application (ContextForge) handles database queries, schema, migrations. The framework receives a clean `Config` object. This keeps the framework dependency-free and testable.
### 2. Singleton as Opt-In
**Current:** Borg singleton pattern — all PluginManager instances share state via `__shared_state`. This prevents having multiple independent managers.

**Proposed:** Singleton becomes opt-in. Default behavior is a normal instance.
```python
class PluginManager:
    _shared_state: ClassVar[dict[str, Any]] = {}
    _shared_lock: ClassVar[threading.Lock] = threading.Lock()

    def __init__(self, config: Config, singleton: bool = False, **kwargs):
        if singleton:
            self.__dict__ = self._shared_state
            # ... existing Borg pattern logic
        else:
            # Normal instance — independent state
            self._config = config
            self._registry = PluginInstanceRegistry()
            self._loader = PluginLoader()
            self._executor = PluginExecutor(...)
            self._initialized = False
```

**Backward compatibility:** Existing code that relies on the singleton pattern continues to work by passing `singleton=True`. The `get_plugin_manager()` function in `__init__.py` would pass `singleton=True` to preserve current behavior.
### 3. TenantPluginManager (cpex/framework/tenant.py)
A new framework component that manages multiple PluginManager instances, one per tenant. Lives alongside PluginManager in the framework and is exported from cpex/framework/__init__.py:
```python
class TenantPluginManager:
    """Manages per-tenant PluginManager instances with config merging."""

    def __init__(
        self,
        timeout: int = DEFAULT_PLUGIN_TIMEOUT,
        observability: Optional[ObservabilityProvider] = None,
        hook_policies: Optional[dict[str, HookPayloadPolicy]] = None,
    ) -> None:
        self._global_config: Optional[Config] = None
        self._timeout = timeout
        self._observability = observability
        self._hook_policies = hook_policies
        self._managers: dict[str, PluginManager] = {}  # tenant_id → manager
        self._global_manager: Optional[PluginManager] = None
        self._lock = asyncio.Lock()

    async def initialize(self) -> None:
        """Load global config and initialize the global manager."""
        self._global_config = await self.load_global_config()
        self._global_manager = PluginManager(
            config=self._global_config,
            timeout=self._timeout,
            observability=self._observability,
            hook_policies=self._hook_policies,
        )
        await self._global_manager.initialize()

    async def load_global_config(self) -> Config:
        """Load the global plugin configuration.

        Override this to load from a database, API, or other source.
        Default returns an empty config.
        """
        return Config()

    async def get_manager(self, tenant_id: Optional[str] = None) -> PluginManager:
        """Get the PluginManager for a tenant. Returns global manager if no tenant."""
        if tenant_id is None:
            return self._global_manager
        if tenant_id in self._managers:
            return self._managers[tenant_id]
        async with self._lock:
            # Double-check after acquiring lock
            if tenant_id not in self._managers:
                manager = await self._create_tenant_manager(tenant_id)
                self._managers[tenant_id] = manager
            return self._managers[tenant_id]

    async def _create_tenant_manager(self, tenant_id: str) -> PluginManager:
        """Create a PluginManager with merged global + tenant config."""
        tenant_config = await self._resolve_tenant_config(tenant_id)
        manager = PluginManager(
            config=tenant_config,
            timeout=self._timeout,
            observability=self._observability,
            hook_policies=self._hook_policies,
        )
        await manager.initialize()
        return manager
```

### 4. Config Resolution (Merging Global + Team)
When a tenant-specific manager is needed, the TenantPluginManager merges configurations:
```python
async def _resolve_tenant_config(self, tenant_id: str) -> Config:
    """Merge global config with tenant-specific config.

    Resolution rules:
    - Global plugins are included unless explicitly overridden by tenant
    - Tenant plugins are added on top of global plugins
    - If a tenant plugin has the same name as a global plugin, the tenant
      version takes precedence (override)
    - Tenant can disable a global plugin by overriding with mode: disabled
    - Tenant non-plugin settings (server_settings, plugin_dirs, etc.)
      override global settings when present
    """
    tenant_overrides = await self.load_tenant_config(tenant_id)

    # Start with global plugins
    merged_plugins: dict[str, PluginConfig] = {
        p.name: p for p in (self._global_config.plugins or [])
    }
    # Apply tenant overrides
    for plugin in (tenant_overrides.plugins or []):
        merged_plugins[plugin.name] = plugin

    # Merge plugin_dirs (global + tenant, deduplicated)
    merged_dirs = list(self._global_config.plugin_dirs or [])
    for d in (tenant_overrides.plugin_dirs or []):
        if d not in merged_dirs:
            merged_dirs.append(d)

    # Non-plugin settings: tenant overrides global when present
    return Config(
        plugins=list(merged_plugins.values()),
        plugin_dirs=merged_dirs,
        server_settings=(
            tenant_overrides.server_settings
            or self._global_config.server_settings
        ),
        grpc_server_settings=(
            tenant_overrides.grpc_server_settings
            or self._global_config.grpc_server_settings
        ),
        unix_socket_server_settings=(
            tenant_overrides.unix_socket_server_settings
            or self._global_config.unix_socket_server_settings
        ),
    )

async def load_tenant_config(self, tenant_id: str) -> Config:
    """Load tenant-specific config.

    Override this method to provide database-backed or API-backed
    tenant configurations. The default returns an empty config
    (no tenant overrides).
    """
    return Config()
```

Both `load_global_config()` and `load_tenant_config()` are public extension points — override them to load from any data source. Examples:
```python
# Database-backed (ContextForge pattern)
class DatabaseTenantPluginManager(TenantPluginManager):
    def __init__(self, session_factory, **kwargs):
        super().__init__(**kwargs)
        self._session_factory = session_factory

    async def load_global_config(self) -> Config:
        return self._load_config_from_db(team_id=None)

    async def load_tenant_config(self, tenant_id: str) -> Config:
        return self._load_config_from_db(team_id=tenant_id)

    def _load_config_from_db(self, team_id: str | None) -> Config:
        # Query plugin_manager_configs + plugin_configs for this team_id
        ...


# YAML-backed (backward compatible, single-tenant)
class YamlPluginManager(TenantPluginManager):
    def __init__(self, yaml_path: str, **kwargs):
        super().__init__(**kwargs)
        self._yaml_path = yaml_path

    async def load_global_config(self) -> Config:
        return ConfigLoader.load_config(self._yaml_path)
```

### 5. Config Change and Reload
When plugin config changes (database update, YAML reload), the affected manager needs to be recreated:
```python
class TenantPluginManager:
    async def reload_tenant(self, tenant_id: str) -> None:
        """Reload a specific tenant's plugin configuration.

        Uses atomic swap: create new manager first, swap the reference.
        Old manager shutdown is deferred to a background task with a
        grace period, allowing in-flight requests to complete.
        """
        # Create new manager before acquiring lock (may be slow)
        new_manager = await self._create_tenant_manager(tenant_id)
        async with self._lock:
            old_manager = self._managers.get(tenant_id)
            # Atomic swap — new requests go to new manager immediately
            self._managers[tenant_id] = new_manager
        # Deferred shutdown — runs in background, caller returns immediately
        if old_manager:
            asyncio.create_task(self._deferred_shutdown(old_manager))

    async def reload_global(self) -> None:
        """Reload global config and recreate all tenant managers.

        Calls load_global_config() to get the current global config,
        then uses atomic swap per manager. Old managers are shut down
        in the background after a grace period.
        """
        self._global_config = await self.load_global_config()
        # Create new global manager
        new_global = PluginManager(
            config=self._global_config,
            timeout=self._timeout,
            observability=self._observability,
            hook_policies=self._hook_policies,
        )
        await new_global.initialize()
        # Create new tenant managers for all currently active tenants
        new_managers: dict[str, PluginManager] = {}
        for tenant_id in list(self._managers.keys()):
            new_managers[tenant_id] = await self._create_tenant_manager(tenant_id)
        # Atomic swap — collect old managers, install new ones
        async with self._lock:
            old_global = self._global_manager
            old_managers = dict(self._managers)
            self._global_manager = new_global
            self._managers = new_managers
        # Deferred shutdown of all old managers in background
        if old_global:
            asyncio.create_task(self._deferred_shutdown(old_global))
        for manager in old_managers.values():
            asyncio.create_task(self._deferred_shutdown(manager))

    async def _deferred_shutdown(
        self,
        manager: PluginManager,
        grace_seconds: float = 30.0,
    ) -> None:
        """Wait for in-flight requests to drain, then shutdown.

        Runs as a background task so the caller (reload_tenant,
        reload_global) returns immediately after the atomic swap.
        The grace period gives in-flight requests time to complete
        before plugin connections are closed.
        """
        await asyncio.sleep(grace_seconds)
        await manager.shutdown()

    async def shutdown(self) -> None:
        """Shutdown all managers immediately (application exit)."""
        async with self._lock:
            managers_to_shutdown = list(self._managers.values())
            global_to_shutdown = self._global_manager
            self._managers.clear()
            self._global_manager = None
        for manager in managers_to_shutdown:
            await manager.shutdown()
        if global_to_shutdown:
            await global_to_shutdown.shutdown()
```

**Trade-off: recreate vs. hot-reload.** This design recreates the entire PluginManager for a tenant on config change. An alternative is hot-reloading individual plugins within a running manager (register/unregister). Recreation is simpler, safer (no partial state), and acceptable if config changes are infrequent. Hot-reload could be added later if needed.
**Atomic swap + deferred shutdown.** New managers are created before acquiring the lock. The lock is held only for the reference swap — which is instant. Old managers are shut down in background tasks after a configurable grace period (default 30s). This means:
- The caller returns immediately after the swap
- In-flight requests on the old manager have 30s to complete
- New requests go to the new manager immediately
- Plugin connections are cleaned up after the grace period
### 6. Service Integration: Get Manager, Call Directly
The TenantPluginManager does not wrap invoke_hook. Services get the PluginManager for their tenant and call it directly. This is critical for local context tables — the context_table returned by a pre-hook must be passed back to the post-hook on the same PluginManager instance.
```python
class TenantPluginManager:
    """Public API — lookup and lifecycle only, not hook invocation."""

    async def get_manager(self, tenant_id: str | None = None) -> PluginManager:
        """Get the PluginManager for a tenant. Returns global if no tenant."""
        ...

    def has_hooks_for(self, hook_type: str, tenant_id: str | None = None) -> bool:
        """Check if hooks exist for a given type and tenant."""
        if tenant_id and tenant_id in self._managers:
            return self._managers[tenant_id].has_hooks_for(hook_type)
        if self._global_manager:
            return self._global_manager.has_hooks_for(hook_type)
        return False

    async def initialize(self) -> None: ...
    async def reload_tenant(self, tenant_id: str) -> None: ...
    async def reload_global(self) -> None: ...
    async def shutdown(self) -> None: ...
```

**Why no `invoke_hook` on TenantPluginManager:**
The PluginManager's invoke_hook returns a (PluginResult, context_table) tuple. The context_table carries per-plugin local state from pre-hook to post-hook. If the TenantPluginManager wrapped invoke_hook, two problems arise:
- **Context/tenant mismatch.** If a service passed a different `tenant_id` on the post-hook call (bug, or tenant resolved differently mid-request), the context_table would go to the wrong manager's plugins. Subtle, hard to debug.
- **Reload between pre and post.** If `get_manager(tenant_id)` is called on each `invoke_hook`, a reload between pre and post means the context_table was created by the old manager but delivered to the new manager.
Both problems are eliminated by having the service pin the manager reference once per request:
```python
# Service pattern — get manager once, use it for the full pre/post flow
manager = await self._tenant_plugin_manager.get_manager(tenant_id)

pre_result, context_table = await manager.invoke_hook(
    ToolHookType.TOOL_PRE_INVOKE, payload, global_context)

# ... tool executes ...

post_result, _ = await manager.invoke_hook(
    ToolHookType.TOOL_POST_INVOKE, post_payload, global_context,
    local_contexts=context_table)
```

The pinned reference ensures:
- The same plugin instances handle both pre and post
- The context_table is always paired with the manager that created it
- If an atomic swap reload happens mid-request, the old manager stays alive until the pinned reference is released (drain pattern)
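Why pinning matters can be shown with a toy model of the tenant → manager map — a self-contained sketch where `Registry`, `pinned_flow`, and `unpinned_flow` are illustrative stand-ins, not framework API:

```python
class Registry:
    """Minimal stand-in for TenantPluginManager's tenant → manager map."""
    def __init__(self):
        self.managers = {"team-a": object()}

    def get(self, tenant_id):
        return self.managers[tenant_id]

    def swap(self, tenant_id):
        # Simulates an atomic-swap reload replacing the manager reference
        self.managers[tenant_id] = object()

def pinned_flow(reg):
    manager = reg.get("team-a")   # pin the reference once per request
    pre_manager = manager         # pre-hook runs against the pinned manager
    reg.swap("team-a")            # a reload lands mid-request
    post_manager = manager        # post-hook still sees the pinned manager
    return pre_manager is post_manager  # True — context_table stays paired

def unpinned_flow(reg):
    pre_manager = reg.get("team-a")
    reg.swap("team-a")
    post_manager = reg.get("team-a")  # fresh lookup returns the new manager
    return pre_manager is post_manager  # False — context_table is orphaned
```

With a pinned reference the pre- and post-hook always hit the same instance; with per-call lookups, a mid-request reload silently splits them.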
### 7. Database Schema (Host Application Concern)
The framework defines the `Config` and `PluginConfig` pydantic models. The host application (ContextForge) maps these to database tables. Two tables are needed: a **config table** representing the non-plugin portion of `Config` (server settings, plugin dirs) and a **plugin config table** for individual plugin definitions. Both tables use `team_id` for scoping — `NULL` represents the global config.
```sql
-- Represents the Config pydantic model (non-plugin settings)
-- team_id = NULL is the global config
CREATE TABLE plugin_manager_configs (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
team_id VARCHAR UNIQUE, -- NULL = global, else team-scoped
description TEXT,
plugin_dirs JSONB DEFAULT '[]',
server_settings JSONB, -- MCPServerConfig
grpc_server_settings JSONB, -- GRPCServerConfig
unix_socket_server_settings JSONB, -- UnixSocketServerConfig
enabled BOOLEAN DEFAULT true,
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now()
);
-- Global config is just a row with team_id = NULL
INSERT INTO plugin_manager_configs (team_id, description)
VALUES (NULL, 'Global plugin configuration');
-- Represents individual PluginConfig entries, linked to a config
CREATE TABLE plugin_configs (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
config_id UUID NOT NULL REFERENCES plugin_manager_configs(id) ON DELETE CASCADE,
name VARCHAR NOT NULL,
description TEXT,
author VARCHAR,
kind VARCHAR NOT NULL,
version VARCHAR,
hooks JSONB DEFAULT '[]',
tags JSONB DEFAULT '[]',
mode VARCHAR DEFAULT 'sequential',
on_error VARCHAR DEFAULT 'fail',
priority INTEGER DEFAULT 100,
conditions JSONB DEFAULT '[]',
capabilities JSONB DEFAULT '[]',
config JSONB, -- Plugin-specific config
mcp JSONB, -- MCP transport config
grpc JSONB, -- gRPC transport config
unix_socket JSONB, -- Unix socket config
enabled BOOLEAN DEFAULT true,
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now(),
UNIQUE (config_id, name) -- Unique plugin name within a config
);
CREATE INDEX idx_plugin_configs_config ON plugin_configs(config_id);
CREATE INDEX idx_plugin_manager_configs_team ON plugin_manager_configs(team_id);
```

**Why two tables:** The `Config` pydantic model has two concerns — non-plugin settings (server_settings, plugin_dirs) and a list of plugin definitions. These map naturally to a parent/child table relationship. The `plugin_manager_configs` table represents the `Config` object itself (with team scoping), and `plugin_configs` holds the individual `PluginConfig` entries that belong to it.
**Global config representation:** Global config is simply a row in `plugin_manager_configs` with `team_id = NULL`. No special IDs or magic values. Queries that resolve tenant config load both the global row and the tenant-specific row, then merge.
The host application provides conversion functions:
```python
def row_to_plugin_config(row) -> PluginConfig:
    return PluginConfig(
        name=row.name,
        description=row.description,
        kind=row.kind,
        version=row.version,
        hooks=row.hooks or [],
        tags=row.tags or [],
        mode=PluginMode(row.mode) if row.mode else PluginMode.SEQUENTIAL,
        on_error=OnError(row.on_error) if row.on_error else OnError.FAIL,
        priority=row.priority or 100,
        conditions=[PluginCondition(**c) for c in (row.conditions or [])],
        capabilities=frozenset(row.capabilities or []),
        config=row.config,
        mcp=MCPClientConfig(**row.mcp) if row.mcp else None,
        grpc=GRPCClientConfig(**row.grpc) if row.grpc else None,
        unix_socket=UnixSocketClientConfig(**row.unix_socket) if row.unix_socket else None,
    )

def rows_to_config(config_row, plugin_rows) -> Config:
    return Config(
        plugins=[row_to_plugin_config(r) for r in plugin_rows if r.enabled],
        plugin_dirs=config_row.plugin_dirs or [],
        server_settings=(
            MCPServerConfig(**config_row.server_settings)
            if config_row.server_settings else None
        ),
        grpc_server_settings=(
            GRPCServerConfig(**config_row.grpc_server_settings)
            if config_row.grpc_server_settings else None
        ),
        unix_socket_server_settings=(
            UnixSocketServerConfig(**config_row.unix_socket_server_settings)
            if config_row.unix_socket_server_settings else None
        ),
    )
```

## Config Resolution Examples
### Example 1: YAML-Only (Backward Compatible)
```python
# Single-tenant, YAML-based — no database needed
manager = YamlPluginManager(yaml_path="plugins/config.yaml")
await manager.initialize()  # calls load_global_config() → reads YAML

# All tenants get the same plugins (the global manager)
plugin_manager = await manager.get_manager()
result, context_table = await plugin_manager.invoke_hook("tool_pre_invoke", payload)
```

### Example 2: Database-Only
```python
# Multi-tenant, database-backed — no YAML needed
manager = DatabaseTenantPluginManager(session_factory=get_db_session)
await manager.initialize()  # calls load_global_config() → queries DB for team_id=NULL

plugin_manager = await manager.get_manager("team-finance")
# Routes to team-specific manager (lazy-created on first request)
result, context_table = await plugin_manager.invoke_hook("tool_pre_invoke", payload)
```

### Example 3: Team Override
Global plugins:
- injection_scanner (priority 10)
- audit_logger (priority 100)
Team "finance" plugins (from database):
- pii_scanner (priority 20) ← added
- audit_logger (priority 50, mode: audit) ← overridden (higher priority, audit mode)
Resolved config for team "finance":
- injection_scanner (priority 10) ← from global
- pii_scanner (priority 20) ← from team
- audit_logger (priority 50, audit) ← team override replaces global version
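The override rule behind this example can be illustrated with plain dictionaries — a self-contained sketch of the name-keyed merge used by `_resolve_tenant_config`, where the dicts are simplified stand-ins for `PluginConfig` objects:

```python
# Name-keyed merge: tenant entries replace global entries with the same name.
def merge_plugins(global_plugins, tenant_plugins):
    merged = {p["name"]: p for p in global_plugins}   # start with global
    for p in tenant_plugins:                          # tenant wins on name clash
        merged[p["name"]] = p
    return sorted(merged.values(), key=lambda p: p["priority"])

global_plugins = [
    {"name": "injection_scanner", "priority": 10},
    {"name": "audit_logger", "priority": 100},
]
team_finance = [
    {"name": "pii_scanner", "priority": 20},
    {"name": "audit_logger", "priority": 50, "mode": "audit"},
]

resolved = merge_plugins(global_plugins, team_finance)
# → [('injection_scanner', 10), ('pii_scanner', 20), ('audit_logger', 50)]
print([(p["name"], p["priority"]) for p in resolved])
```

The team's `audit_logger` entry fully replaces the global one, which is why its priority and mode both change in the resolved config.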
### Example 4: Team Disabling a Global Plugin
Global plugins:
- injection_scanner
- audit_logger
Team "development" plugins (from database):
- injection_scanner (mode: disabled) ← disables the global plugin for this team
Resolved config for team "development":
- injection_scanner (disabled) ← present but won't execute
- audit_logger ← from global, unchanged
## Migration Path

### Phase 1: Extract Config Input

- Modify PluginManager to accept a `Config` object directly
- Add a `from_yaml()` class method for backward compatibility
- Make singleton opt-in via a `singleton` parameter
- No behavioral change for existing users
### Phase 2: Add TenantPluginManager

- Add `TenantPluginManager` to `cpex/framework/tenant.py`
- Export from `cpex/framework/__init__.py`
- Two override points: `load_global_config()` and `load_tenant_config()`
- Both default to returning empty config
- Add a `YamlPluginManager` convenience subclass for backward compatibility
- No database dependency in the framework
### Phase 3: Database Integration (Host Application)

- ContextForge adds `plugin_manager_configs` + `plugin_configs` tables and migration
- ContextForge subclasses TenantPluginManager with database-backed `load_global_config()` and `load_tenant_config()`
- ContextForge adds API endpoints for CRUD on plugin configs
- Config change events trigger `reload_tenant()` or `reload_global()`
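One way Phase 3's change events could reach the framework is a simple poll of per-config `updated_at` watermarks. A hedged sketch — the snapshot shape (`{tenant_id: updated_at}`, with `None` keying the global row) and the host-side code producing it are assumptions, not part of the proposal:

```python
import asyncio

async def apply_config_changes(tenant_manager, seen, current):
    """Compare two {tenant_id: updated_at} snapshots (None key = global)
    and trigger the matching framework reloads. Returns the new snapshot.

    Illustrative only: real deployments might replace polling with Redis
    pub/sub or PostgreSQL LISTEN/NOTIFY, as discussed in Open Questions.
    """
    if current.get(None) != seen.get(None):
        # Global config changed — reload_global() also rebuilds tenant managers
        await tenant_manager.reload_global()
    else:
        for tenant_id, stamp in current.items():
            if tenant_id is not None and stamp != seen.get(tenant_id):
                await tenant_manager.reload_tenant(tenant_id)
    return current

async def watch(tenant_manager, fetch_snapshot, interval_seconds=10.0):
    """Simple polling loop; fetch_snapshot is a host-supplied coroutine
    (e.g., a SELECT over plugin_manager_configs.updated_at)."""
    seen = await fetch_snapshot()
    while True:
        await asyncio.sleep(interval_seconds)
        seen = await apply_config_changes(tenant_manager, seen,
                                          await fetch_snapshot())
```

Checking the global watermark first mirrors the framework semantics: `reload_global()` already recreates every active tenant manager, so per-tenant reloads are only needed when the global config is unchanged.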
## Design Decisions

- **Atomic swap + deferred shutdown for reload.** New managers are created before acquiring the lock. The lock is held only for the reference swap. Old managers are shut down in background tasks after a 30s grace period, giving in-flight requests time to complete. The caller returns immediately.
- **Independent plugin instances per tenant.** Each tenant's PluginManager loads its own plugin instances. Sharing instances across tenants would save memory but complicates state isolation. Independent instances are safer and simpler.
- **Global config as `team_id = NULL`.** No special IDs or magic values. The global config is a regular row in `plugin_manager_configs` with `team_id = NULL`. Queries load global + tenant and merge.
- **Tenant settings can override global settings.** Non-plugin settings (server_settings, plugin_dirs, etc.) follow the same merge rule as plugins: tenant takes precedence when present, otherwise falls back to global.
- **`load_global_config()` and `load_tenant_config()` are public.** They're the primary extension points, designed to be overridden by subclasses. No underscore prefix — Python convention for template methods intended for override. Both default to returning empty config. The framework never loads config itself — it always calls these methods, which subclasses implement for their data source.
## Open Questions

- **User-level plugins?** The design supports it (tenant_id could be a user ID), but it creates an explosion of PluginManager instances. A user who needs custom behavior can be modeled as a team-of-one. Worth deferring unless there's a concrete use case.
- **Lazy vs. eager tenant initialization?** The current design is lazy (create a manager on first request). Eager (create all known tenant managers at startup) trades startup time for first-request latency. Lazy is likely better since not all teams may be active.
- **Config change notification — local and distributed.** `reload_tenant()` and `reload_global()` are local operations — they reload the calling instance only. In multi-instance deployments (multiple replicas behind a load balancer), the host application must propagate change events to other instances. The framework exposes the reload methods; the distribution mechanism (Redis pub/sub, database polling, PostgreSQL LISTEN/NOTIFY, webhooks) is the host application's concern. See the ContextForge proposal for a concrete Redis pub/sub implementation.
- **Config validation on write.** When plugin config is written to the database, should the framework validate it? The `PluginConfig` pydantic model provides validation. The host application should validate before persisting — the framework validates again when loading.
- **Tenant eviction.** If a tenant's PluginManager hasn't been accessed in a while, should it be evicted to free resources? Could use a TTL-based eviction policy on `_managers`. Not critical for initial implementation but worth considering for deployments with many teams.
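The eviction question could be prototyped with a last-access timestamp per tenant. A minimal, self-contained sketch — the `TtlEvictionPolicy` class, its `idle_seconds` default, and the idea that `get_manager()` would call `touch()` and evicted managers would reuse the deferred-shutdown path are all assumptions, not part of the design above:

```python
import time

class TtlEvictionPolicy:
    """Track last access per tenant and report idle tenants for eviction.

    Illustrative sketch: TenantPluginManager could call touch() inside
    get_manager(), then periodically evict idle_tenants() via the same
    atomic-swap + deferred-shutdown path used by reload_tenant().
    """

    def __init__(self, idle_seconds: float = 3600.0, clock=time.monotonic):
        self._idle_seconds = idle_seconds
        self._clock = clock  # injectable for testing
        self._last_access: dict[str, float] = {}

    def touch(self, tenant_id: str) -> None:
        """Record an access (called on every get_manager() hit)."""
        self._last_access[tenant_id] = self._clock()

    def idle_tenants(self) -> list[str]:
        """Tenants whose managers have been idle past the TTL."""
        now = self._clock()
        return [t for t, seen in self._last_access.items()
                if now - seen >= self._idle_seconds]

    def forget(self, tenant_id: str) -> None:
        """Drop tracking after a tenant's manager is evicted."""
        self._last_access.pop(tenant_id, None)
```

Keeping the policy separate from the manager map means eviction can be added later without touching the lazy-creation path: an evicted tenant is simply recreated on its next request.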