From 65f7bc0fabde3ae710a8f90205756bc39d581c12 Mon Sep 17 00:00:00 2001
From: JR Boos
Date: Fri, 27 Mar 2026 08:57:10 -0400
Subject: [PATCH] LCORE-1253: Add e2e proxy and TLS networking tests

Squashed commits:
- (lcore-1251) added tls e2e tests
- (lcore-1251) fixed tls tests & removed other e2e tests for quicker test running
- (lcore-1251) restored test_list.txt
- (lcore-1251) use `trustme` for certs
- (lcore-1251) quick tls server fix
- (lcore-1251) removed tags in place of steps
- (fix) removed unused code; fixed tls config; verified correct llm response

LCORE-1253: Add e2e proxy and TLS networking tests

Add comprehensive end-to-end tests verifying that Llama Stack's
NetworkConfig (proxy, TLS) works correctly through the Lightspeed Stack
pipeline.

Test infrastructure:
- TunnelProxy: Async HTTP CONNECT tunnel proxy that creates TCP tunnels
  for HTTPS traffic. Tracks CONNECT count and target hosts.
- InterceptionProxy: Async TLS-intercepting (MITM) proxy using a trustme
  CA to generate per-target server certificates. Simulates corporate SSL
  inspection proxies.

Behave scenarios (tests/e2e/features/proxy.feature):
- Tunnel proxy: Configures run.yaml with a NetworkConfig proxy pointing to
  a local tunnel proxy. Verifies CONNECT to api.openai.com:443 is observed
  and the LLM query succeeds through the proxy.
- Interception proxy: Configures run.yaml with a proxy and custom CA cert
  (trustme). Verifies TLS interception of api.openai.com traffic and a
  successful LLM query through the MITM proxy.
- TLS version: Configures run.yaml with min_version TLSv1.2 and verifies
  the LLM query succeeds with the TLS constraint.

Each scenario dynamically generates a modified run-ci.yaml with the
appropriate NetworkConfig, restarts Llama Stack with the new config,
restarts the Lightspeed Stack, and sends a query to verify the full
pipeline.

Added trustme>=1.2.1 to dev dependencies.
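The TunnelProxy itself is not included in this patch's diff. As a rough sketch only (illustrative names, not the project's actual implementation), an asyncio CONNECT tunnel with the 10-second connect timeout and the malformed-port ValueError handling described in the review feedback could look like:

```python
import asyncio


def parse_connect_target(request_line: str) -> tuple[str, int]:
    """Parse 'CONNECT host:port HTTP/1.1' into (host, port).

    Raises ValueError for a malformed target; the proxy maps that to a
    400 response instead of crashing the handler.
    """
    method, target, _version = request_line.split(" ", 2)
    if method != "CONNECT":
        raise ValueError(f"not a CONNECT request: {method}")
    host, _, port_str = target.rpartition(":")
    return host, int(port_str)  # int() raises ValueError on a bad port


async def _pipe(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
    """Copy bytes one way until EOF, then close the destination."""
    try:
        while data := await reader.read(65536):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()


async def handle_client(
    client_reader: asyncio.StreamReader, client_writer: asyncio.StreamWriter
) -> None:
    """Accept one CONNECT request and relay the resulting TCP tunnel."""
    line = (await client_reader.readline()).decode("latin-1").rstrip("\r\n")
    try:
        host, port = parse_connect_target(line)
        # Bounded connect so an unreachable target cannot stall the test run
        remote_reader, remote_writer = await asyncio.wait_for(
            asyncio.open_connection(host, port), timeout=10
        )
    except (ValueError, OSError, asyncio.TimeoutError):
        client_writer.write(b"HTTP/1.1 400 Bad Request\r\n\r\n")
        await client_writer.drain()
        client_writer.close()
        return
    # Drain the remaining request headers, then acknowledge the tunnel
    while (await client_reader.readline()) not in (b"\r\n", b""):
        pass
    client_writer.write(b"HTTP/1.1 200 Connection Established\r\n\r\n")
    await client_writer.drain()
    # Relay bytes in both directions until either side closes
    await asyncio.gather(
        _pipe(client_reader, remote_writer),
        _pipe(remote_reader, client_writer),
    )
```

The real TunnelProxy additionally records the CONNECT count and target hosts so scenarios can assert that traffic actually traversed the proxy; that bookkeeping is omitted here.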
LCORE-1253: Add negative tests, TLS/cipher scenarios, and cleanup hooks

Expand proxy e2e test coverage to fully address all acceptance criteria:

AC1 (tunnel proxy):
- Add negative test: LLM query fails gracefully when the proxy is
  unreachable

AC2 (interception proxy with CA):
- Add negative test: LLM query fails when the interception proxy CA is not
  provided (verifies "only successful when correct CA is provided")

AC3 (TLS version and ciphers):
- Add TLSv1.3 minimum version scenario
- Add custom cipher suite configuration scenario (ECDHE+AESGCM:DHE+AESGCM)

Test infrastructure:
- Add an after_scenario cleanup hook in environment.py that restores the
  original Llama Stack and Lightspeed Stack configs after @Proxy
  scenarios. Prevents config leaks between scenarios.
- Use a different port for each interception proxy instance to avoid
  address-already-in-use errors in sequential scenarios.

Documentation:
- Update docs/e2e_scenarios.md with all 7 proxy test scenarios.
- Update docs/e2e_testing.md with proxy-related Behave tags (@Proxy,
  @TunnelProxy, @InterceptionProxy, @TLSVersion, @TLSCipher).

LCORE-1253: Address review feedback

Changes requested by reviewer (tisnik) and CodeRabbit:
- Detect Docker mode once in before_all and store it as
  context.is_docker_mode. All proxy step functions now use the context
  attribute instead of calling _is_docker_mode() repeatedly.
- Log the exception in _restore_original_services instead of silently
  swallowing it.
- Only clear context.services_modified on successful restoration, not when
  restoration fails (prevents leaking modified state).
- Add a 10-second timeout to the tunnel proxy's open_connection to prevent
  stalls on unreachable targets.
- Handle a malformed CONNECT port with a ValueError catch and a 400
  response.

LCORE-1253: Replace tag-based cleanup with Background restore step

Move config restoration from the @Proxy after_scenario hook to an explicit
Background Given step. This follows the team convention that tags are used
only for test selection (filtering), not for triggering behavior.

The Background step "The original Llama Stack config is restored if
modified" runs before every scenario. If a previous scenario left a
modified run.yaml (detected by backup file existence), it restores the
original and restarts services. This handles cleanup even when the
previous scenario failed mid-way.

Removed:
- @Proxy tag from the feature file (was triggering the after_scenario hook)
- after_scenario hook for @Proxy in environment.py
- _restore_original_services function (replaced by the Background step)
- context.services_modified tracking (no hook reads it)

Updated docs/e2e_testing.md: tags documented as selection-only, not
behavior-triggering.

LCORE-1253: Address radofuchs review feedback

Rewrite the proxy e2e tests to follow project conventions:
- Reuse existing step definitions: use "I use query to ask question" from
  llm_query_response.py and "The status code of the response is" from
  common_http.py instead of custom query/response steps.
- Split the service restart into two explicit Given steps: "Llama Stack is
  restarted" and "Lightspeed Stack is restarted" so the restart ordering
  is visible in the feature file.
- Remove the local (non-Docker) mode code path. Proxy tests use
  restart_container() exclusively, consistent with the rest of the e2e
  test suite.
- Check the specific status code 500 for error scenarios instead of the
  broad >= 400 range.
- Remove the custom send_query, verify_llm_response, and
  verify_error_response steps that duplicated existing functionality.

Net reduction: -183 lines from the step definitions.

LCORE-1253: Clean up proxy servers between scenarios

Stop proxy servers and their event loops explicitly in the Background
restore step. Previously, proxy daemon threads were left running after
each scenario, causing asyncio "Task was destroyed but it is pending"
warnings at process exit. The _stop_proxy helper schedules an async stop
on the proxy's event loop, waits for it to complete, then stops the loop.
Context references are cleared so the next scenario starts clean.

LCORE-1253: Stop proxy servers after last scenario in after_feature

Add proxy cleanup in after_feature to stop proxy servers left running from
the last scenario. The Background restore step handles cleanup between
scenarios, but the last scenario's proxies persist until process exit,
causing asyncio "Task was destroyed" warnings. The cleanup checks for
proxy objects on context (no tag check needed) and calls _stop_proxy to
gracefully shut down the event loops.
---
 docker-compose-library.yaml                        |  28 ++-
 docker-compose.yaml                                |  25 ++
 .../library-mode/lightspeed-stack-tls.yaml         |  21 ++
 .../server-mode/lightspeed-stack-tls.yaml          |  22 ++
 tests/e2e/features/environment.py                  |   9 +
 tests/e2e/features/steps/proxy.py                  |  18 +-
 tests/e2e/features/steps/tls.py                    | 226 ++++++++++++++++++
 tests/e2e/features/tls.feature                     |  61 +++++
 .../e2e/mock_tls_inference_server/Dockerfile       |  14 ++
 tests/e2e/mock_tls_inference_server/server.py      | 216 +++++++++++++++++
 tests/e2e/test_list.txt                            |   1 +
 11 files changed, 637 insertions(+), 4 deletions(-)
 create mode 100644 tests/e2e/configuration/library-mode/lightspeed-stack-tls.yaml
 create mode 100644 tests/e2e/configuration/server-mode/lightspeed-stack-tls.yaml
 create mode 100644 tests/e2e/features/steps/tls.py
 create mode 100644 tests/e2e/features/tls.feature
 create mode 100644 tests/e2e/mock_tls_inference_server/Dockerfile
 create mode 100644 tests/e2e/mock_tls_inference_server/server.py

diff --git a/docker-compose-library.yaml b/docker-compose-library.yaml
index 52482b18b..cf9374eed 100644
--- a/docker-compose-library.yaml
+++ b/docker-compose-library.yaml
@@ -30,6 +30,8 @@ services:
         condition: service_healthy
       mock-mcp:
         condition: service_healthy
+      mock-tls-inference:
+        condition: service_healthy
     networks:
       - lightspeednet
     volumes:
@@ -40,6 +42,7 @@ services:
       - ./tests/e2e/rag:/opt/app-root/src/.llama/storage/rag:Z
       - ./tests/e2e/secrets/mcp-token:/tmp/mcp-token:ro
      - ./tests/e2e/secrets/invalid-mcp-token:/tmp/invalid-mcp-token:ro
+      - mock-tls-certs:/certs:ro
     environment:
       # LLM Provider API Keys
       - BRAVE_SEARCH_API_KEY=${BRAVE_SEARCH_API_KEY:-}
@@ -113,7 +116,30 @@ services:
       retries: 3
       start_period: 2s

+  # Mock TLS inference server for TLS E2E tests
+  mock-tls-inference:
+    build:
+      context: ./tests/e2e/mock_tls_inference_server
+      dockerfile: Dockerfile
+    container_name: mock-tls-inference
+    ports:
+      - "8443:8443"
+      - "8444:8444"
+    networks:
+      - lightspeednet
+    volumes:
+      - mock-tls-certs:/certs
+    healthcheck:
+      test: ["CMD", "python", "-c", "import urllib.request,ssl;c=ssl.create_default_context();c.check_hostname=False;c.verify_mode=ssl.CERT_NONE;urllib.request.urlopen('https://localhost:8443/health',context=c)"]
+      interval: 5s
+      timeout: 3s
+      retries: 3
+      start_period: 5s
+
 networks:
   lightspeednet:
-    driver: bridge
\ No newline at end of file
+    driver: bridge
+
+volumes:
+  mock-tls-certs:
\ No newline at end of file
diff --git a/docker-compose.yaml b/docker-compose.yaml
index 1de76cdb3..6810d3660 100644
--- a/docker-compose.yaml
+++ b/docker-compose.yaml
@@ -25,12 +25,16 @@ services:
     container_name: llama-stack
     ports:
       - "8321:8321"  # Expose llama-stack on 8321 (adjust if needed)
+    depends_on:
+      mock-tls-inference:
+        condition: service_healthy
     volumes:
       - ./run.yaml:/opt/app-root/run.yaml:z
       - ${GCP_KEYS_PATH:-./tmp/.gcp-keys-dummy}:/opt/app-root/.gcp-keys:ro
       - ./lightspeed-stack.yaml:/opt/app-root/lightspeed-stack.yaml:ro
       - llama-storage:/opt/app-root/src/.llama/storage
       - ./tests/e2e/rag:/opt/app-root/src/.llama/storage/rag:z
+      - mock-tls-certs:/certs:ro
     environment:
       - BRAVE_SEARCH_API_KEY=${BRAVE_SEARCH_API_KEY:-}
       - TAVILY_SEARCH_API_KEY=${TAVILY_SEARCH_API_KEY:-}
@@ -140,9 +144,30 @@ services:
       retries: 3
       start_period: 2s

+  # Mock TLS inference server for TLS E2E tests
+  mock-tls-inference:
+    build:
+      context: ./tests/e2e/mock_tls_inference_server
+      dockerfile: Dockerfile
+    container_name: mock-tls-inference
+    ports:
+      - "8443:8443"
+      - "8444:8444"
+    networks:
+      - lightspeednet
+    volumes:
+      - mock-tls-certs:/certs
+    healthcheck:
+      test: ["CMD", "python", "-c", "import urllib.request,ssl;c=ssl.create_default_context();c.check_hostname=False;c.verify_mode=ssl.CERT_NONE;urllib.request.urlopen('https://localhost:8443/health',context=c)"]
+      interval: 5s
+      timeout: 3s
+      retries: 3
+      start_period: 5s
+
 volumes:
   llama-storage:
+  mock-tls-certs:

 networks:
   lightspeednet:
diff --git a/tests/e2e/configuration/library-mode/lightspeed-stack-tls.yaml b/tests/e2e/configuration/library-mode/lightspeed-stack-tls.yaml
new file mode 100644
index 000000000..438ddcc9e
--- /dev/null
+++ b/tests/e2e/configuration/library-mode/lightspeed-stack-tls.yaml
@@ -0,0 +1,21 @@
+name: Lightspeed Core Service (LCS)
+service:
+  host: 0.0.0.0
+  port: 8080
+  auth_enabled: false
+  workers: 1
+  color_log: true
+  access_log: true
+llama_stack:
+  use_as_library_client: true
+  library_client_config_path: run.yaml
+user_data_collection:
+  feedback_enabled: true
+  feedback_storage: "/tmp/data/feedback"
+  transcripts_enabled: true
+  transcripts_storage: "/tmp/data/transcripts"
+authentication:
+  module: "noop"
+inference:
+  default_provider: tls-openai
+  default_model: mock-tls-model
diff --git a/tests/e2e/configuration/server-mode/lightspeed-stack-tls.yaml b/tests/e2e/configuration/server-mode/lightspeed-stack-tls.yaml
new file mode 100644
index 000000000..babdc2b99
--- /dev/null
+++ b/tests/e2e/configuration/server-mode/lightspeed-stack-tls.yaml
@@ -0,0 +1,22 @@
+name: Lightspeed Core Service (LCS)
+service:
+  host: 0.0.0.0
+  port: 8080
+  auth_enabled: false
+  workers: 1
+  color_log: true
+  access_log: true
+llama_stack:
+  use_as_library_client: false
+  url: http://llama-stack:8321
+  api_key: xyzzy
+user_data_collection:
+  feedback_enabled: true
+  feedback_storage: "/tmp/data/feedback"
+  transcripts_enabled: true
+  transcripts_storage: "/tmp/data/transcripts"
+authentication:
+  module: "noop"
+inference:
+  default_provider: tls-openai
+  default_model: mock-tls-model
diff --git a/tests/e2e/features/environment.py b/tests/e2e/features/environment.py
index c42a2f3e4..72780030c 100644
--- a/tests/e2e/features/environment.py
+++ b/tests/e2e/features/environment.py
@@ -552,6 +552,15 @@ def after_feature(context: Context, feature: Feature) -> None:
         restart_container("lightspeed-stack")
         remove_config_backup(context.default_config_backup)

+    # Restore Lightspeed Stack config if TLS Background step switched it
+    if getattr(context, "tls_config_active", False):
+        switch_config(context.default_config_backup)
+        remove_config_backup(context.default_config_backup)
+        if not context.is_library_mode:
+            restart_container("llama-stack")
+        restart_container("lightspeed-stack")
+        context.tls_config_active = False
+
     # Clean up any proxy servers left from the last scenario
     if hasattr(context, "tunnel_proxy") or hasattr(context, "interception_proxy"):
         from tests.e2e.features.steps.proxy import _stop_proxy
diff --git a/tests/e2e/features/steps/proxy.py b/tests/e2e/features/steps/proxy.py
index 7801b60b2..5c550edd5 100644
--- a/tests/e2e/features/steps/proxy.py
+++ b/tests/e2e/features/steps/proxy.py
@@ -137,10 +137,22 @@ def restore_if_modified(context: Context) -> None:
     _stop_proxy(context, "tunnel_proxy", "proxy_loop")
     _stop_proxy(context, "interception_proxy", "interception_proxy_loop")

+    # Check for backups from both proxy and TLS scenarios
+    _LLAMA_STACK_TLS_BACKUP = "run.yaml.tls-backup"
+    backup_to_restore = None
     if os.path.exists(_LLAMA_STACK_CONFIG_BACKUP):
-        print("Restoring original Llama Stack config from backup...")
-        shutil.copy(_LLAMA_STACK_CONFIG_BACKUP, _LLAMA_STACK_CONFIG)
-        os.remove(_LLAMA_STACK_CONFIG_BACKUP)
+        backup_to_restore = _LLAMA_STACK_CONFIG_BACKUP
+    elif os.path.exists(_LLAMA_STACK_TLS_BACKUP):
+        backup_to_restore = _LLAMA_STACK_TLS_BACKUP
+
+    if backup_to_restore:
+        print(f"Restoring original Llama Stack config from {backup_to_restore}...")
+        shutil.copy(backup_to_restore, _LLAMA_STACK_CONFIG)
+        os.remove(backup_to_restore)
+        # Clean up the other backup too if it exists
+        for other_backup in [_LLAMA_STACK_CONFIG_BACKUP, _LLAMA_STACK_TLS_BACKUP]:
+            if other_backup != backup_to_restore and os.path.exists(other_backup):
+                os.remove(other_backup)
         restart_container("llama-stack")
         restart_container("lightspeed-stack")
diff --git a/tests/e2e/features/steps/tls.py b/tests/e2e/features/steps/tls.py
new file mode 100644
index 000000000..cd891abbb
--- /dev/null
+++ b/tests/e2e/features/steps/tls.py
@@ -0,0 +1,226 @@
+"""Step definitions for TLS configuration e2e tests.
+
+These tests configure Llama Stack's run.yaml with NetworkConfig TLS settings
+and verify the full pipeline works through the Lightspeed Stack.
+
+Config switching uses the same pattern as other e2e tests: overwrite the
+host-mounted run.yaml and restart Docker containers. Cleanup is handled
+by a Background step that restores the backup before each scenario.
+"""
+
+import copy
+import os
+import shutil
+
+import yaml
+from behave import given  # pyright: ignore[reportAttributeAccessIssue]
+from behave.runner import Context
+
+from tests.e2e.utils.utils import (
+    create_config_backup,
+    restart_container,
+    switch_config,
+)
+
+# Llama Stack config — mounted into the container from the host
+_LLAMA_STACK_CONFIG = "run.yaml"
+_LLAMA_STACK_CONFIG_BACKUP = "run.yaml.tls-backup"
+
+_LIGHTSPEED_STACK_CONFIG = "lightspeed-stack.yaml"
+
+
+def _load_llama_config() -> dict:
+    """Load the base Llama Stack run config.
+
+    Returns:
+        The parsed YAML configuration as a dictionary.
+    """
+    with open(_LLAMA_STACK_CONFIG, encoding="utf-8") as f:
+        return yaml.safe_load(f)
+
+
+def _write_config(config: dict, path: str) -> None:
+    """Write a YAML config file.
+
+    Parameters:
+        config: The configuration dictionary to write.
+        path: The file path to write to.
+    """
+    with open(path, "w", encoding="utf-8") as f:
+        yaml.dump(config, f, default_flow_style=False)
+
+
+_TLS_PROVIDER_BASE: dict = {
+    "provider_id": "tls-openai",
+    "provider_type": "remote::openai",
+    "config": {
+        "api_key": "test-key",
+        "base_url": "https://mock-tls-inference:8443/v1",
+        "allowed_models": ["mock-tls-model"],
+    },
+}
+
+_TLS_MODEL_RESOURCE: dict = {
+    "model_id": "mock-tls-model",
+    "provider_id": "tls-openai",
+    "provider_model_id": "mock-tls-model",
+}
+
+
+def _ensure_tls_provider(config: dict) -> dict:
+    """Find or create the tls-openai inference provider in the config.
+
+    If the provider does not exist, it is added along with the
+    mock-tls-model registered resource.
+
+    Parameters:
+        config: The Llama Stack configuration dictionary.
+
+    Returns:
+        The tls-openai provider configuration dictionary.
+    """
+    providers = config.setdefault("providers", {})
+    inference = providers.setdefault("inference", [])
+
+    for provider in inference:
+        if provider.get("provider_id") == "tls-openai":
+            return provider
+
+    # Provider not found — add it
+    provider = copy.deepcopy(_TLS_PROVIDER_BASE)
+    inference.append(provider)
+
+    # Also register the model resource
+    resources = config.setdefault("registered_resources", {})
+    models = resources.setdefault("models", [])
+    if not any(m.get("model_id") == "mock-tls-model" for m in models):
+        models.append(copy.deepcopy(_TLS_MODEL_RESOURCE))
+
+    return provider
+
+
+def _backup_llama_config() -> None:
+    """Create a backup of the current run.yaml if not already backed up."""
+    if not os.path.exists(_LLAMA_STACK_CONFIG_BACKUP):
+        shutil.copy(_LLAMA_STACK_CONFIG, _LLAMA_STACK_CONFIG_BACKUP)
+
+
+def _prepare_tls_provider() -> tuple[dict, dict]:
+    """Back up run.yaml, load it, ensure the TLS provider exists, and init network config.
+
+    Returns:
+        A tuple of (full config dict, provider's network config dict).
+    """
+    _backup_llama_config()
+    config = _load_llama_config()
+    provider = _ensure_tls_provider(config)
+    provider.setdefault("config", {}).setdefault("network", {})
+    return config, provider
+
+
+# --- Background Steps ---
+# Restart steps ("The original Llama Stack config is restored if modified",
+# "Llama Stack is restarted", "Lightspeed Stack is restarted") are defined in
+# proxy.py and shared across features by behave.
+
+
+@given("Lightspeed Stack is configured for TLS testing")
+def configure_lightspeed_for_tls(context: Context) -> None:
+    """Switch lightspeed-stack.yaml to the TLS test configuration.
+
+    Backs up the current config and switches to the TLS variant that sets
+    default_provider to tls-openai and default_model to mock-tls-model.
+    The backup is restored in after_scenario via the shared restore step.
+
+    Parameters:
+        context: Behave test context.
+    """
+    mode_dir = "library-mode" if context.is_library_mode else "server-mode"
+    tls_config = f"tests/e2e/configuration/{mode_dir}/lightspeed-stack-tls.yaml"
+
+    if not hasattr(context, "default_config_backup"):
+        context.default_config_backup = create_config_backup(_LIGHTSPEED_STACK_CONFIG)
+
+    switch_config(tls_config)
+    restart_container("lightspeed-stack")
+    context.tls_config_active = True
+
+
+# --- TLS Configuration Steps ---
+
+
+@given("Llama Stack is configured with TLS verification disabled")
+def configure_tls_verify_false(context: Context) -> None:
+    """Configure run.yaml with TLS verify: false.
+
+    Parameters:
+        context: Behave test context.
+    """
+    config, provider = _prepare_tls_provider()
+    provider["config"]["network"]["tls"] = {"verify": False}
+    _write_config(config, _LLAMA_STACK_CONFIG)
+
+
+@given("Llama Stack is configured with CA certificate verification")
+def configure_tls_verify_ca(context: Context) -> None:
+    """Configure run.yaml with TLS verify: /certs/ca.crt.
+
+    Parameters:
+        context: Behave test context.
+    """
+    config, provider = _prepare_tls_provider()
+    provider["config"]["network"]["tls"] = {
+        "verify": "/certs/ca.crt",
+        "min_version": "TLSv1.2",
+    }
+    _write_config(config, _LLAMA_STACK_CONFIG)
+
+
+@given("Llama Stack is configured with TLS verification enabled")
+def configure_tls_verify_true(context: Context) -> None:
+    """Configure run.yaml with TLS verify: true.
+
+    This should fail when connecting to a self-signed certificate server.
+
+    Parameters:
+        context: Behave test context.
+    """
+    config, provider = _prepare_tls_provider()
+    provider["config"]["network"]["tls"] = {"verify": True}
+    _write_config(config, _LLAMA_STACK_CONFIG)
+
+
+@given("Llama Stack is configured with mutual TLS authentication")
+def configure_tls_mtls(context: Context) -> None:
+    """Configure run.yaml with mutual TLS (client cert and key).
+
+    Parameters:
+        context: Behave test context.
+    """
+    config, provider = _prepare_tls_provider()
+
+    # Update base_url to use the mTLS server port
+    provider["config"]["base_url"] = "https://mock-tls-inference:8444/v1"
+
+    provider["config"]["network"]["tls"] = {
+        "verify": "/certs/ca.crt",
+        "client_cert": "/certs/client.crt",
+        "client_key": "/certs/client.key",
+    }
+    _write_config(config, _LLAMA_STACK_CONFIG)
+
+
+@given('Llama Stack is configured with TLS minimum version "{version}"')
+def configure_tls_min_version(context: Context, version: str) -> None:
+    """Configure run.yaml with TLS minimum version.
+
+    Parameters:
+        context: Behave test context.
+        version: The TLS version (e.g., "TLSv1.2", "TLSv1.3").
+    """
+    config, provider = _prepare_tls_provider()
+    provider["config"]["network"]["tls"] = {
+        "verify": "/certs/ca.crt",
+        "min_version": version,
+    }
+    _write_config(config, _LLAMA_STACK_CONFIG)
diff --git a/tests/e2e/features/tls.feature b/tests/e2e/features/tls.feature
new file mode 100644
index 000000000..5c47d63b8
--- /dev/null
+++ b/tests/e2e/features/tls.feature
@@ -0,0 +1,61 @@
+@skip-in-library-mode
+Feature: TLS configuration for remote inference providers
+  Validate that Llama Stack's NetworkConfig.tls settings are applied correctly
+  when connecting to a remote inference provider over HTTPS.
+
+  Background:
+    Given The service is started locally
+    And REST API service prefix is /v1
+    And Lightspeed Stack is configured for TLS testing
+    And The original Llama Stack config is restored if modified
+
+  Scenario: Inference succeeds with TLS verification disabled
+    Given Llama Stack is configured with TLS verification disabled
+    And Llama Stack is restarted
+    And Lightspeed Stack is restarted
+    When I use "query" to ask question
+    """
+    {"query": "Say hello", "model": "mock-tls-model", "provider": "tls-openai"}
+    """
+    Then The status code of the response is 200
+
+  Scenario: Inference succeeds with CA certificate verification
+    Given Llama Stack is configured with CA certificate verification
+    And Llama Stack is restarted
+    And Lightspeed Stack is restarted
+    When I use "query" to ask question
+    """
+    {"query": "Say hello", "model": "mock-tls-model", "provider": "tls-openai"}
+    """
+    Then The status code of the response is 200
+
+  Scenario: Inference fails when TLS verify is true against self-signed cert
+    Given Llama Stack is configured with TLS verification enabled
+    And Llama Stack is restarted
+    And Lightspeed Stack is restarted
+    When I use "query" to ask question
+    """
+    {"query": "Say hello", "model": "mock-tls-model", "provider": "tls-openai"}
+    """
+    Then The status code of the response is 500
+    And The body of the response does not contain Hello from the TLS mock inference server
+
+  Scenario: Inference succeeds with mutual TLS authentication
+    Given Llama Stack is configured with mutual TLS authentication
+    And Llama Stack is restarted
+    And Lightspeed Stack is restarted
+    When I use "query" to ask question
+    """
+    {"query": "Say hello", "model": "mock-tls-model", "provider": "tls-openai"}
+    """
+    Then The status code of the response is 200
+
+  Scenario: Inference succeeds with TLS minimum version TLSv1.3
+    Given Llama Stack is configured with TLS minimum version "TLSv1.3"
+    And Llama Stack is restarted
+    And Lightspeed Stack is restarted
+    When I use "query" to ask question
+    """
+    {"query": "Say hello", "model": "mock-tls-model", "provider": "tls-openai"}
+    """
+    Then The status code of the response is 200
diff --git a/tests/e2e/mock_tls_inference_server/Dockerfile b/tests/e2e/mock_tls_inference_server/Dockerfile
new file mode 100644
index 000000000..ee9cbde16
--- /dev/null
+++ b/tests/e2e/mock_tls_inference_server/Dockerfile
@@ -0,0 +1,14 @@
+FROM python:3.12-slim
+WORKDIR /app
+
+# Install trustme for dynamic certificate generation
+RUN pip install --no-cache-dir trustme
+
+# Copy server script
+COPY server.py .
+
+# Create /certs directory for generated certificates
+RUN mkdir -p /certs
+
+EXPOSE 8443 8444
+CMD ["python", "server.py"]
diff --git a/tests/e2e/mock_tls_inference_server/server.py b/tests/e2e/mock_tls_inference_server/server.py
new file mode 100644
index 000000000..75cec1637
--- /dev/null
+++ b/tests/e2e/mock_tls_inference_server/server.py
@@ -0,0 +1,216 @@
+#!/usr/bin/env python3
+"""Mock OpenAI-compatible HTTPS inference server for TLS e2e testing.
+
+Serves two HTTPS listeners using trustme-generated test certificates:
+  - Port 8443: standard TLS (no client certificate required)
+  - Port 8444: mutual TLS (client certificate required, verified against CA)
+
+Implements the minimal OpenAI API surface needed by Llama Stack's
+remote::openai provider: /v1/models and /v1/chat/completions.
+
+Certificates are generated on-the-fly using trustme at server startup.
+"""
+
+import json
+import ssl
+import sys
+import threading
+import time
+from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
+from pathlib import Path
+from typing import Any
+
+import trustme
+
+MODEL_ID = "mock-tls-model"
+
+
+class OpenAIHandler(BaseHTTPRequestHandler):
+    """Handles OpenAI-compatible API requests over HTTPS."""
+
+    def log_message(
+        self, format: str, *args: Any
+    ) -> None:  # pylint: disable=redefined-builtin
+        """Timestamp log output."""
+        print(f"[{time.strftime('%Y-%m-%d %H:%M:%S')}] {format % args}")
+
+    def do_GET(self) -> None:  # pylint: disable=invalid-name
+        """Handle GET requests."""
+        if self.path == "/health":
+            self._send_json({"status": "ok"})
+        elif self.path == "/v1/models":
+            self._send_json(
+                {
+                    "object": "list",
+                    "data": [
+                        {
+                            "id": MODEL_ID,
+                            "object": "model",
+                            "created": 1700000000,
+                            "owned_by": "test",
+                        }
+                    ],
+                }
+            )
+        else:
+            self.send_error(404)
+
+    def do_POST(self) -> None:  # pylint: disable=invalid-name
+        """Handle POST requests (chat completions)."""
+        if self.path != "/v1/chat/completions":
+            self.send_error(404)
+            return
+
+        content_length = int(self.headers.get("Content-Length", 0))
+        body = self.rfile.read(content_length) if content_length > 0 else b"{}"
+
+        try:
+            request_data = json.loads(body.decode("utf-8"))
+        except (json.JSONDecodeError, UnicodeDecodeError):
+            request_data = {}
+
+        model = request_data.get("model", MODEL_ID)
+
+        self._send_json(
+            {
+                "id": "chatcmpl-tls-test-001",
+                "object": "chat.completion",
+                "created": 1700000000,
+                "model": model,
+                "choices": [
+                    {
+                        "index": 0,
+                        "message": {
+                            "role": "assistant",
+                            "content": "Hello from the TLS mock inference server.",
+                        },
+                        "finish_reason": "stop",
+                    }
+                ],
+                "usage": {
+                    "prompt_tokens": 8,
+                    "completion_tokens": 9,
+                    "total_tokens": 17,
+                },
+            }
+        )
+
+    def _send_json(self, data: dict | list) -> None:
+        """Write a JSON response."""
+        payload = json.dumps(data).encode()
+        self.send_response(200)
+        self.send_header("Content-Type", "application/json")
+        self.send_header("Content-Length", str(len(payload)))
+        self.end_headers()
+        self.wfile.write(payload)
+
+
+def _make_tls_context(
+    ca: trustme.CA,
+    server_cert: trustme.LeafCert,
+    require_client_cert: bool = False,
+) -> ssl.SSLContext:
+    """Build an SSL context using trustme-generated certificates.
+
+    Parameters:
+        ca: The trustme CA instance.
+        server_cert: The server certificate issued by the CA.
+        require_client_cert: Whether to require client certificate (mTLS).
+
+    Returns:
+        Configured SSL context for server-side TLS.
+    """
+    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
+    server_cert.configure_cert(ctx)
+    if require_client_cert:
+        ctx.verify_mode = ssl.CERT_REQUIRED
+        ca.configure_trust(ctx)
+    return ctx
+
+
+def _run_server(httpd: ThreadingHTTPServer, label: str) -> None:
+    """Serve requests forever in a daemon thread."""
+    print(f"{label} listening")
+    try:
+        httpd.serve_forever()
+    except Exception as exc:  # pylint: disable=broad-except
+        print(f"{label} error: {exc}")
+
+
+def main() -> None:
+    """Start standard-TLS (8443) and mTLS (8444) listeners.
+
+    Generates certificates on-the-fly using trustme and exports the CA cert
+    to /certs/ca.crt and client cert to /certs/client.* for use by tests.
+    """
+    tls_port = int(sys.argv[1]) if len(sys.argv) > 1 else 8443
+    mtls_port = int(sys.argv[2]) if len(sys.argv) > 2 else 8444
+
+    print("=" * 60)
+    print("Generating TLS certificates with trustme...")
+    print("=" * 60)
+
+    # Generate CA and certificates
+    ca = trustme.CA()
+    # Server cert with SANs for Docker service name and localhost
+    server_cert = ca.issue_cert("mock-tls-inference", "localhost", "127.0.0.1")
+    # Client cert for mTLS testing (use a simple hostname without spaces)
+    client_cert = ca.issue_cert("tls-e2e-test-client")
+
+    # Export certificates to /certs directory for access by tests
+    certs_dir = Path("/certs")
+    certs_dir.mkdir(exist_ok=True, parents=True)
+
+    # Export CA certificate
+    ca.cert_pem.write_to_path(str(certs_dir / "ca.crt"))
+    print(f"  CA cert: {certs_dir / 'ca.crt'}")
+
+    # Export client certificate and key for mTLS tests
+    client_cert.private_key_pem.write_to_path(str(certs_dir / "client.key"))
+    # Write certificate chain (may include multiple certs)
+    with (certs_dir / "client.crt").open("wb") as f:
+        for blob in client_cert.cert_chain_pems:
+            f.write(blob.bytes())
+    print(f"  Client cert: {certs_dir / 'client.crt'}")
+    print(f"  Client key: {certs_dir / 'client.key'}")
+
+    print("=" * 60)
+    print("Starting servers...")
+    print("=" * 60)
+
+    # Create TLS server (no client cert required)
+    tls_server = ThreadingHTTPServer(("", tls_port), OpenAIHandler)
+    tls_ctx = _make_tls_context(ca, server_cert, require_client_cert=False)
+    tls_server.socket = tls_ctx.wrap_socket(tls_server.socket, server_side=True)
+
+    # Create mTLS server (client cert required)
+    mtls_server = ThreadingHTTPServer(("", mtls_port), OpenAIHandler)
+    mtls_ctx = _make_tls_context(ca, server_cert, require_client_cert=True)
+    mtls_server.socket = mtls_ctx.wrap_socket(mtls_server.socket, server_side=True)
+
+    print("=" * 60)
+    print("Mock TLS Inference Server")
+    print("=" * 60)
+    print(f"  TLS  : https://localhost:{tls_port} (no client cert)")
+    print(f"  mTLS : https://localhost:{mtls_port} (client cert required)")
+    print(f"  Model: {MODEL_ID}")
+    print("=" * 60)
+
+    for srv, label in [
+        (tls_server, f"TLS :{tls_port}"),
+        (mtls_server, f"mTLS :{mtls_port}"),
+    ]:
+        t = threading.Thread(target=_run_server, args=(srv, label), daemon=True)
+        t.start()
+
+    try:
+        while True:
+            time.sleep(3600)
+    except KeyboardInterrupt:
+        print("\nShutting down...")
+        tls_server.shutdown()
+        mtls_server.shutdown()
+
+
+if __name__ == "__main__":
+    main()
diff --git a/tests/e2e/test_list.txt b/tests/e2e/test_list.txt
index 83ed6a17f..4668a551b 100644
--- a/tests/e2e/test_list.txt
+++ b/tests/e2e/test_list.txt
@@ -20,3 +20,4 @@
 features/rest_api.feature
 features/mcp.feature
 features/models.feature
 features/proxy.feature
+features/tls.feature
\ No newline at end of file
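As a usage sketch, a client of the mock server above could trust the exported CA and pin the minimum TLS version that the scenarios exercise. This is illustrative only: the `/certs/ca.crt` path comes from the patch, while `make_client_context` and `list_models` are hypothetical helper names, not part of the test suite.

```python
import json
import ssl
import urllib.request
from typing import Optional

# Map the NetworkConfig-style version strings to ssl enum values.
_TLS_VERSIONS = {
    "TLSv1.2": ssl.TLSVersion.TLSv1_2,
    "TLSv1.3": ssl.TLSVersion.TLSv1_3,
}


def make_client_context(cafile: Optional[str], min_version: str) -> ssl.SSLContext:
    """Client-side context that trusts `cafile` and floors the TLS version."""
    ctx = ssl.create_default_context(cafile=cafile)
    ctx.minimum_version = _TLS_VERSIONS[min_version]
    return ctx


def list_models(base_url: str, ctx: ssl.SSLContext) -> list:
    """GET /v1/models and return the model ids from the response."""
    with urllib.request.urlopen(f"{base_url}/v1/models", context=ctx) as resp:
        data = json.load(resp)
    return [m["id"] for m in data["data"]]
```

Against a running mock server, `list_models("https://localhost:8443", make_client_context("/certs/ca.crt", "TLSv1.2"))` should return the single registered model id; with the wrong CA or a `verify: true`-style default trust store, the handshake fails instead, which is exactly what the 500-status scenario asserts.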