# LCORE-1251: Added TLS E2E Tests #1413
**`docker-compose.yaml`**

```diff
@@ -25,12 +25,16 @@ services:
     container_name: llama-stack
     ports:
       - "8321:8321" # Expose llama-stack on 8321 (adjust if needed)
+    depends_on:
+      mock-tls-inference:
+        condition: service_healthy
     volumes:
       - ./run.yaml:/opt/app-root/run.yaml:z
       - ${GCP_KEYS_PATH:-./tmp/.gcp-keys-dummy}:/opt/app-root/.gcp-keys:ro
       - ./lightspeed-stack.yaml:/opt/app-root/lightspeed-stack.yaml:ro
      - llama-storage:/opt/app-root/src/.llama/storage
       - ./tests/e2e/rag:/opt/app-root/src/.llama/storage/rag:z
+      - mock-tls-certs:/certs:ro
     environment:
       - BRAVE_SEARCH_API_KEY=${BRAVE_SEARCH_API_KEY:-}
       - TAVILY_SEARCH_API_KEY=${TAVILY_SEARCH_API_KEY:-}
```
```diff
@@ -140,9 +144,30 @@ services:
       retries: 3
       start_period: 2s

+  # Mock TLS inference server for TLS E2E tests
+  mock-tls-inference:
+    build:
+      context: ./tests/e2e/mock_tls_inference_server
+      dockerfile: Dockerfile
+    container_name: mock-tls-inference
+    ports:
+      - "8443:8443"
+      - "8444:8444"
```
> **Reviewer comment** (lines +153 to +155): Don't publish the mock TLS ports to the host. Everything here talks to […] Proposed fix:
>
> ```diff
> -    ports:
> -      - "8443:8443"
> -      - "8444:8444"
> +    expose:
> +      - "8443"
> +      - "8444"
> ```
```diff
+    networks:
+      - lightspeednet
+    volumes:
+      - mock-tls-certs:/certs
+    healthcheck:
+      test: ["CMD", "python", "-c", "import urllib.request,ssl;c=ssl.create_default_context();c.check_hostname=False;c.verify_mode=ssl.CERT_NONE;urllib.request.urlopen('https://localhost:8443/health',context=c)"]
+      interval: 5s
+      timeout: 3s
+      retries: 3
+      start_period: 5s

 volumes:
   llama-storage:
+  mock-tls-certs:

 networks:
   lightspeednet:
```
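The container healthcheck packs a verification-skipping HTTPS probe into a `python -c` one-liner. Expanded for readability (a sketch only; the `/health` endpoint and port come from the compose file, and `check_health`/`insecure_context` are hypothetical helper names, not part of the PR):

```python
import ssl
import urllib.request


def insecure_context() -> ssl.SSLContext:
    """TLS context that skips certificate checks.

    Acceptable only for probing a server with a self-signed
    certificate, as the compose healthcheck does.
    """
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    return ctx


def check_health(url: str = "https://localhost:8443/health") -> bool:
    """Return True when the mock server answers its health endpoint."""
    try:
        with urllib.request.urlopen(url, context=insecure_context()) as resp:
            return 200 <= resp.status < 300
    except OSError:  # URLError is an OSError subclass
        return False
```

Because any non-success HTTP status raises from `urlopen`, the healthcheck one-liner succeeds exactly when the probe returns a 2xx/3xx response.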
**New file** `@@ -0,0 +1,21 @@` (the library-mode variant):

```yaml
name: Lightspeed Core Service (LCS)
service:
  host: 0.0.0.0
  port: 8080
  auth_enabled: false
  workers: 1
  color_log: true
  access_log: true
llama_stack:
  use_as_library_client: true
  library_client_config_path: run.yaml
user_data_collection:
  feedback_enabled: true
  feedback_storage: "/tmp/data/feedback"
  transcripts_enabled: true
  transcripts_storage: "/tmp/data/transcripts"
authentication:
  module: "noop"
inference:
  default_provider: tls-openai
  default_model: mock-tls-model
```

> **Reviewer comment:** Create a version of this config in the prow environment as well.
**New file** `@@ -0,0 +1,22 @@` (the server-mode variant):

```yaml
name: Lightspeed Core Service (LCS)
service:
  host: 0.0.0.0
  port: 8080
  auth_enabled: false
  workers: 1
  color_log: true
  access_log: true
llama_stack:
  use_as_library_client: false
  url: http://llama-stack:8321
  api_key: xyzzy
user_data_collection:
  feedback_enabled: true
  feedback_storage: "/tmp/data/feedback"
  transcripts_enabled: true
  transcripts_storage: "/tmp/data/transcripts"
authentication:
  module: "noop"
inference:
  default_provider: tls-openai
  default_model: mock-tls-model
```
**New file** `@@ -0,0 +1,226 @@` (TLS e2e step definitions):

```python
"""Step definitions for TLS configuration e2e tests.

These tests configure Llama Stack's run.yaml with NetworkConfig TLS settings
and verify the full pipeline works through the Lightspeed Stack.

Config switching uses the same pattern as other e2e tests: overwrite the
host-mounted run.yaml and restart Docker containers. Cleanup is handled
by a Background step that restores the backup before each scenario.
"""

import copy
import os
import shutil

import yaml
from behave import given  # pyright: ignore[reportAttributeAccessIssue]
from behave.runner import Context

from tests.e2e.utils.utils import (
    create_config_backup,
    restart_container,
    switch_config,
)

# Llama Stack config — mounted into the container from the host
_LLAMA_STACK_CONFIG = "run.yaml"
_LLAMA_STACK_CONFIG_BACKUP = "run.yaml.tls-backup"

_LIGHTSPEED_STACK_CONFIG = "lightspeed-stack.yaml"
```
> **Reviewer comment** (on `_load_llama_config`): This logic is already present in different files.

```python
def _load_llama_config() -> dict:
    """Load the base Llama Stack run config.

    Returns:
        The parsed YAML configuration as a dictionary.
    """
    with open(_LLAMA_STACK_CONFIG, encoding="utf-8") as f:
        return yaml.safe_load(f)


def _write_config(config: dict, path: str) -> None:
    """Write a YAML config file.

    Parameters:
        config: The configuration dictionary to write.
        path: The file path to write to.
    """
    with open(path, "w", encoding="utf-8") as f:
        yaml.dump(config, f, default_flow_style=False)


_TLS_PROVIDER_BASE: dict = {
    "provider_id": "tls-openai",
    "provider_type": "remote::openai",
    "config": {
        "api_key": "test-key",
        "base_url": "https://mock-tls-inference:8443/v1",
        "allowed_models": ["mock-tls-model"],
    },
}

_TLS_MODEL_RESOURCE: dict = {
    "model_id": "mock-tls-model",
    "provider_id": "tls-openai",
    "provider_model_id": "mock-tls-model",
}


def _ensure_tls_provider(config: dict) -> dict:
    """Find or create the tls-openai inference provider in the config.

    If the provider does not exist, it is added along with the
    mock-tls-model registered resource.

    Parameters:
        config: The Llama Stack configuration dictionary.

    Returns:
        The tls-openai provider configuration dictionary.
    """
    providers = config.setdefault("providers", {})
    inference = providers.setdefault("inference", [])

    for provider in inference:
        if provider.get("provider_id") == "tls-openai":
            return provider

    # Provider not found — add it
    provider = copy.deepcopy(_TLS_PROVIDER_BASE)
    inference.append(provider)

    # Also register the model resource
    resources = config.setdefault("registered_resources", {})
    models = resources.setdefault("models", [])
    if not any(m.get("model_id") == "mock-tls-model" for m in models):
        models.append(copy.deepcopy(_TLS_MODEL_RESOURCE))

    return provider


def _backup_llama_config() -> None:
    """Create a backup of the current run.yaml if not already backed up."""
    if not os.path.exists(_LLAMA_STACK_CONFIG_BACKUP):
        shutil.copy(_LLAMA_STACK_CONFIG, _LLAMA_STACK_CONFIG_BACKUP)


def _prepare_tls_provider() -> tuple[dict, dict]:
    """Back up run.yaml, load it, ensure the TLS provider exists, and init network config.

    Returns:
        A tuple of (full config dict, tls-openai provider dict).
    """
    _backup_llama_config()
    config = _load_llama_config()
    provider = _ensure_tls_provider(config)
    provider.setdefault("config", {}).setdefault("network", {})
    return config, provider
```
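When `_ensure_tls_provider` runs against a config that lacks the provider, the injected run.yaml fragment looks like this (a sketch assembled from `_TLS_PROVIDER_BASE` and `_TLS_MODEL_RESOURCE`; surrounding run.yaml keys are elided):

```yaml
providers:
  inference:
    - provider_id: tls-openai
      provider_type: remote::openai
      config:
        api_key: test-key
        base_url: https://mock-tls-inference:8443/v1
        allowed_models:
          - mock-tls-model
registered_resources:
  models:
    - model_id: mock-tls-model
      provider_id: tls-openai
      provider_model_id: mock-tls-model
```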
```python
# --- Background Steps ---
# Restart steps ("The original Llama Stack config is restored if modified",
# "Llama Stack is restarted", "Lightspeed Stack is restarted") are defined in
# proxy.py and shared across features by behave.


@given("Lightspeed Stack is configured for TLS testing")
def configure_lightspeed_for_tls(context: Context) -> None:
    """Switch lightspeed-stack.yaml to the TLS test configuration.

    Backs up the current config and switches to the TLS variant that sets
    default_provider to tls-openai and default_model to mock-tls-model.
    The backup is restored in after_scenario via the shared restore step.

    Parameters:
        context: Behave test context.
    """
    mode_dir = "library-mode" if context.is_library_mode else "server-mode"
    tls_config = f"tests/e2e/configuration/{mode_dir}/lightspeed-stack-tls.yaml"

    if not hasattr(context, "default_config_backup"):
        context.default_config_backup = create_config_backup(_LIGHTSPEED_STACK_CONFIG)

    switch_config(tls_config)
    restart_container("lightspeed-stack")
    context.tls_config_active = True
```

> **Reviewer comment** (on the `mode_dir` selection): This will never work in the prow/konflux environment.
```python
# --- TLS Configuration Steps ---


@given("Llama Stack is configured with TLS verification disabled")
def configure_tls_verify_false(context: Context) -> None:
    """Configure run.yaml with TLS verify: false.

    Parameters:
        context: Behave test context.
    """
    config, provider = _prepare_tls_provider()
    provider["config"]["network"]["tls"] = {"verify": False}
    _write_config(config, _LLAMA_STACK_CONFIG)


@given("Llama Stack is configured with CA certificate verification")
def configure_tls_verify_ca(context: Context) -> None:
    """Configure run.yaml with TLS verify: /certs/ca.crt.

    Parameters:
        context: Behave test context.
    """
    config, provider = _prepare_tls_provider()
    provider["config"]["network"]["tls"] = {
        "verify": "/certs/ca.crt",
        "min_version": "TLSv1.2",
    }
    _write_config(config, _LLAMA_STACK_CONFIG)


@given("Llama Stack is configured with TLS verification enabled")
def configure_tls_verify_true(context: Context) -> None:
    """Configure run.yaml with TLS verify: true.

    This should fail when connecting to a self-signed certificate server.

    Parameters:
        context: Behave test context.
    """
    config, provider = _prepare_tls_provider()
    provider["config"]["network"]["tls"] = {"verify": True}
    _write_config(config, _LLAMA_STACK_CONFIG)


@given("Llama Stack is configured with mutual TLS authentication")
def configure_tls_mtls(context: Context) -> None:
    """Configure run.yaml with mutual TLS (client cert and key).

    Parameters:
        context: Behave test context.
    """
    config, provider = _prepare_tls_provider()

    # Update base_url to use the mTLS server port
    provider["config"]["base_url"] = "https://mock-tls-inference:8444/v1"

    provider["config"]["network"]["tls"] = {
        "verify": "/certs/ca.crt",
        "client_cert": "/certs/client.crt",
        "client_key": "/certs/client.key",
    }
    _write_config(config, _LLAMA_STACK_CONFIG)


@given('Llama Stack is configured with TLS minimum version "{version}"')
def configure_tls_min_version(context: Context, version: str) -> None:
    """Configure run.yaml with TLS minimum version.

    Parameters:
        context: Behave test context.
        version: The TLS version (e.g., "TLSv1.2", "TLSv1.3").
    """
    config, provider = _prepare_tls_provider()
    provider["config"]["network"]["tls"] = {
        "verify": "/certs/ca.crt",
        "min_version": version,
    }
    _write_config(config, _LLAMA_STACK_CONFIG)
```
> **Reviewer comment:** Don't publish the mock TLS ports to the host here either. The library-mode stack reaches `mock-tls-inference` over the compose network, so these host bindings only make unrelated local runs fail when `8443` or `8444` is already occupied.