28 changes: 27 additions & 1 deletion docker-compose-library.yaml
@@ -30,6 +30,8 @@ services:
condition: service_healthy
mock-mcp:
condition: service_healthy
mock-tls-inference:
condition: service_healthy
networks:
- lightspeednet
volumes:
@@ -40,6 +42,7 @@ services:
- ./tests/e2e/rag:/opt/app-root/src/.llama/storage/rag:Z
- ./tests/e2e/secrets/mcp-token:/tmp/mcp-token:ro
- ./tests/e2e/secrets/invalid-mcp-token:/tmp/invalid-mcp-token:ro
- mock-tls-certs:/certs:ro
environment:
# LLM Provider API Keys
- BRAVE_SEARCH_API_KEY=${BRAVE_SEARCH_API_KEY:-}
@@ -113,7 +116,30 @@ services:
retries: 3
start_period: 2s

# Mock TLS inference server for TLS E2E tests
mock-tls-inference:
build:
context: ./tests/e2e/mock_tls_inference_server
dockerfile: Dockerfile
container_name: mock-tls-inference
ports:
- "8443:8443"
- "8444:8444"
Comment on lines +125 to +127 (Contributor):
⚠️ Potential issue | 🟠 Major

Don't publish the mock TLS ports to the host here either.

The library-mode stack reaches mock-tls-inference over the compose network, so these host bindings only make unrelated local runs fail when 8443 or 8444 is already occupied.

✂️ Proposed fix
-    ports:
-      - "8443:8443"
-      - "8444:8444"
+    expose:
+      - "8443"
+      - "8444"

networks:
- lightspeednet
volumes:
- mock-tls-certs:/certs
healthcheck:
test: ["CMD", "python", "-c", "import urllib.request,ssl;c=ssl.create_default_context();c.check_hostname=False;c.verify_mode=ssl.CERT_NONE;urllib.request.urlopen('https://localhost:8443/health',context=c)"]
interval: 5s
timeout: 3s
retries: 3
start_period: 5s


networks:
lightspeednet:
driver: bridge

volumes:
mock-tls-certs:
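The inline healthcheck above packs an entire TLS probe into a single `python -c` string. Unrolled into plain Python (a sketch of what that one-liner does; the URL and the SSL flags mirror the compose file), it reads:

```python
import ssl
import urllib.request


def insecure_health_check(url: str = "https://localhost:8443/health") -> None:
    """Probe the mock server's /health endpoint, skipping cert verification.

    The mock server presents a self-signed certificate, so the default
    verifying context would reject it; the healthcheck only cares that the
    TLS listener is up and answering.
    """
    ctx = ssl.create_default_context()
    ctx.check_hostname = False       # don't match the certificate's hostname
    ctx.verify_mode = ssl.CERT_NONE  # don't verify the (self-signed) chain
    # Any connection error or HTTP error status raises, which Docker counts
    # as a failed healthcheck attempt.
    urllib.request.urlopen(url, context=ctx)
```

Docker retries the probe up to `retries: 3` times at `interval: 5s` before marking the container unhealthy.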
25 changes: 25 additions & 0 deletions docker-compose.yaml
@@ -25,12 +25,16 @@ services:
container_name: llama-stack
ports:
- "8321:8321" # Expose llama-stack on 8321 (adjust if needed)
depends_on:
mock-tls-inference:
condition: service_healthy
volumes:
- ./run.yaml:/opt/app-root/run.yaml:z
- ${GCP_KEYS_PATH:-./tmp/.gcp-keys-dummy}:/opt/app-root/.gcp-keys:ro
- ./lightspeed-stack.yaml:/opt/app-root/lightspeed-stack.yaml:ro
- llama-storage:/opt/app-root/src/.llama/storage
- ./tests/e2e/rag:/opt/app-root/src/.llama/storage/rag:z
- mock-tls-certs:/certs:ro
environment:
- BRAVE_SEARCH_API_KEY=${BRAVE_SEARCH_API_KEY:-}
- TAVILY_SEARCH_API_KEY=${TAVILY_SEARCH_API_KEY:-}
@@ -140,9 +144,30 @@ services:
retries: 3
start_period: 2s

# Mock TLS inference server for TLS E2E tests
mock-tls-inference:
build:
context: ./tests/e2e/mock_tls_inference_server
dockerfile: Dockerfile
container_name: mock-tls-inference
ports:
- "8443:8443"
- "8444:8444"
Comment on lines +153 to +155 (Contributor):
⚠️ Potential issue | 🟠 Major

Don't publish the mock TLS ports to the host.

Everything here talks to mock-tls-inference over lightspeednet, and the healthcheck runs in-container, so binding 8443/8444 on the host only adds avoidable startup failures when those ports are already in use.

✂️ Proposed fix
-    ports:
-      - "8443:8443"
-      - "8444:8444"
+    expose:
+      - "8443"
+      - "8444"

networks:
- lightspeednet
volumes:
- mock-tls-certs:/certs
healthcheck:
test: ["CMD", "python", "-c", "import urllib.request,ssl;c=ssl.create_default_context();c.check_hostname=False;c.verify_mode=ssl.CERT_NONE;urllib.request.urlopen('https://localhost:8443/health',context=c)"]
interval: 5s
timeout: 3s
retries: 3
start_period: 5s


volumes:
llama-storage:
mock-tls-certs:

networks:
lightspeednet:
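The conflict both review comments describe is easy to reproduce locally. A quick standalone check (illustrative only, not part of the PR) that reports whether the mock TLS ports are already bound on the host:

```python
import socket


def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something on the host is already listening on port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 when the connection succeeds, i.e. when a
        # listener already occupies the port.
        return s.connect_ex((host, port)) == 0


for port in (8443, 8444):
    if port_in_use(port):
        print(f"port {port} is taken; `docker compose up` would fail here")
```

Switching the service from `ports:` to `expose:` sidesteps the problem entirely, since the compose network does not touch host ports.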
21 changes: 21 additions & 0 deletions tests/e2e/configuration/library-mode/lightspeed-stack-tls.yaml
@@ -0,0 +1,21 @@
name: Lightspeed Core Service (LCS)
Contributor: Create a version of this config in the prow environment as well.
service:
host: 0.0.0.0
port: 8080
auth_enabled: false
workers: 1
color_log: true
access_log: true
llama_stack:
use_as_library_client: true
library_client_config_path: run.yaml
user_data_collection:
feedback_enabled: true
feedback_storage: "/tmp/data/feedback"
transcripts_enabled: true
transcripts_storage: "/tmp/data/transcripts"
authentication:
module: "noop"
inference:
default_provider: tls-openai
default_model: mock-tls-model
22 changes: 22 additions & 0 deletions tests/e2e/configuration/server-mode/lightspeed-stack-tls.yaml
@@ -0,0 +1,22 @@
name: Lightspeed Core Service (LCS)
service:
host: 0.0.0.0
port: 8080
auth_enabled: false
workers: 1
color_log: true
access_log: true
llama_stack:
use_as_library_client: false
url: http://llama-stack:8321
api_key: xyzzy
user_data_collection:
feedback_enabled: true
feedback_storage: "/tmp/data/feedback"
transcripts_enabled: true
transcripts_storage: "/tmp/data/transcripts"
authentication:
module: "noop"
inference:
default_provider: tls-openai
default_model: mock-tls-model
9 changes: 9 additions & 0 deletions tests/e2e/features/environment.py
@@ -552,6 +552,15 @@ def after_feature(context: Context, feature: Feature) -> None:
restart_container("lightspeed-stack")
remove_config_backup(context.default_config_backup)

# Restore Lightspeed Stack config if TLS Background step switched it
if getattr(context, "tls_config_active", False):
switch_config(context.default_config_backup)
remove_config_backup(context.default_config_backup)
if not context.is_library_mode:
restart_container("llama-stack")
restart_container("lightspeed-stack")
context.tls_config_active = False

# Clean up any proxy servers left from the last scenario
if hasattr(context, "tunnel_proxy") or hasattr(context, "interception_proxy"):
from tests.e2e.features.steps.proxy import _stop_proxy
226 changes: 226 additions & 0 deletions tests/e2e/features/steps/tls.py
@@ -0,0 +1,226 @@
"""Step definitions for TLS configuration e2e tests.

These tests configure Llama Stack's run.yaml with NetworkConfig TLS settings
and verify the full pipeline works through the Lightspeed Stack.

Config switching uses the same pattern as other e2e tests: overwrite the
host-mounted run.yaml and restart Docker containers. Cleanup is handled
by a Background step that restores the backup before each scenario.
"""

import copy
import os
import shutil

import yaml
from behave import given # pyright: ignore[reportAttributeAccessIssue]
from behave.runner import Context

from tests.e2e.utils.utils import (
create_config_backup,
restart_container,
switch_config,
)

# Llama Stack config — mounted into the container from the host
_LLAMA_STACK_CONFIG = "run.yaml"
_LLAMA_STACK_CONFIG_BACKUP = "run.yaml.tls-backup"

_LIGHTSPEED_STACK_CONFIG = "lightspeed-stack.yaml"


def _load_llama_config() -> dict:
Contributor: This logic is already present in other files.
"""Load the base Llama Stack run config.
Returns:
The parsed YAML configuration as a dictionary.
"""
with open(_LLAMA_STACK_CONFIG, encoding="utf-8") as f:
return yaml.safe_load(f)


def _write_config(config: dict, path: str) -> None:
"""Write a YAML config file.
Parameters:
config: The configuration dictionary to write.
path: The file path to write to.
"""
with open(path, "w", encoding="utf-8") as f:
yaml.dump(config, f, default_flow_style=False)


_TLS_PROVIDER_BASE: dict = {
"provider_id": "tls-openai",
"provider_type": "remote::openai",
"config": {
"api_key": "test-key",
"base_url": "https://mock-tls-inference:8443/v1",
"allowed_models": ["mock-tls-model"],
},
}

_TLS_MODEL_RESOURCE: dict = {
"model_id": "mock-tls-model",
"provider_id": "tls-openai",
"provider_model_id": "mock-tls-model",
}


def _ensure_tls_provider(config: dict) -> dict:
"""Find or create the tls-openai inference provider in the config.
If the provider does not exist, it is added along with the
mock-tls-model registered resource.
Parameters:
config: The Llama Stack configuration dictionary.
Returns:
The tls-openai provider configuration dictionary.
"""
providers = config.setdefault("providers", {})
inference = providers.setdefault("inference", [])

for provider in inference:
if provider.get("provider_id") == "tls-openai":
return provider

# Provider not found — add it
provider = copy.deepcopy(_TLS_PROVIDER_BASE)
inference.append(provider)

# Also register the model resource
resources = config.setdefault("registered_resources", {})
models = resources.setdefault("models", [])
if not any(m.get("model_id") == "mock-tls-model" for m in models):
models.append(copy.deepcopy(_TLS_MODEL_RESOURCE))

return provider


def _backup_llama_config() -> None:
"""Create a backup of the current run.yaml if not already backed up."""
if not os.path.exists(_LLAMA_STACK_CONFIG_BACKUP):
shutil.copy(_LLAMA_STACK_CONFIG, _LLAMA_STACK_CONFIG_BACKUP)


def _prepare_tls_provider() -> tuple[dict, dict]:
"""Back up run.yaml, load it, ensure the TLS provider exists, and init network config.
Returns:
A tuple of (full config dict, provider's network config dict).
"""
_backup_llama_config()
config = _load_llama_config()
provider = _ensure_tls_provider(config)
provider.setdefault("config", {}).setdefault("network", {})
return config, provider


# --- Background Steps ---
# Restart steps ("The original Llama Stack config is restored if modified",
# "Llama Stack is restarted", "Lightspeed Stack is restarted") are defined in
# proxy.py and shared across features by behave.


@given("Lightspeed Stack is configured for TLS testing")
def configure_lightspeed_for_tls(context: Context) -> None:
"""Switch lightspeed-stack.yaml to the TLS test configuration.
Backs up the current config and switches to the TLS variant that sets
default_provider to tls-openai and default_model to mock-tls-model.
The backup is restored in after_scenario via the shared restore step.
Parameters:
context: Behave test context.
"""
mode_dir = "library-mode" if context.is_library_mode else "server-mode"
Contributor: This will never work in the prow/konflux environment.

tls_config = f"tests/e2e/configuration/{mode_dir}/lightspeed-stack-tls.yaml"

if not hasattr(context, "default_config_backup"):
context.default_config_backup = create_config_backup(_LIGHTSPEED_STACK_CONFIG)

switch_config(tls_config)
restart_container("lightspeed-stack")
context.tls_config_active = True


# --- TLS Configuration Steps ---


@given("Llama Stack is configured with TLS verification disabled")
def configure_tls_verify_false(context: Context) -> None:
"""Configure run.yaml with TLS verify: false.
Parameters:
context: Behave test context.
"""
config, provider = _prepare_tls_provider()
provider["config"]["network"]["tls"] = {"verify": False}
_write_config(config, _LLAMA_STACK_CONFIG)


@given("Llama Stack is configured with CA certificate verification")
def configure_tls_verify_ca(context: Context) -> None:
"""Configure run.yaml with TLS verify: /certs/ca.crt.
Parameters:
context: Behave test context.
"""
config, provider = _prepare_tls_provider()
provider["config"]["network"]["tls"] = {
"verify": "/certs/ca.crt",
"min_version": "TLSv1.2",
}
_write_config(config, _LLAMA_STACK_CONFIG)


@given("Llama Stack is configured with TLS verification enabled")
def configure_tls_verify_true(context: Context) -> None:
"""Configure run.yaml with TLS verify: true.
This should fail when connecting to a self-signed certificate server.
Parameters:
context: Behave test context.
"""
config, provider = _prepare_tls_provider()
provider["config"]["network"]["tls"] = {"verify": True}
_write_config(config, _LLAMA_STACK_CONFIG)


@given("Llama Stack is configured with mutual TLS authentication")
def configure_tls_mtls(context: Context) -> None:
"""Configure run.yaml with mutual TLS (client cert and key).
Parameters:
context: Behave test context.
"""
config, provider = _prepare_tls_provider()

# Update base_url to use the mTLS server port
provider["config"]["base_url"] = "https://mock-tls-inference:8444/v1"

provider["config"]["network"]["tls"] = {
"verify": "/certs/ca.crt",
"client_cert": "/certs/client.crt",
"client_key": "/certs/client.key",
}
_write_config(config, _LLAMA_STACK_CONFIG)


@given('Llama Stack is configured with TLS minimum version "{version}"')
def configure_tls_min_version(context: Context, version: str) -> None:
"""Configure run.yaml with TLS minimum version.
Parameters:
context: Behave test context.
version: The TLS version (e.g., "TLSv1.2", "TLSv1.3").
"""
config, provider = _prepare_tls_provider()
provider["config"]["network"]["tls"] = {
"verify": "/certs/ca.crt",
"min_version": version,
}
_write_config(config, _LLAMA_STACK_CONFIG)
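To make the find-or-create behavior of `_ensure_tls_provider` concrete, here is the same logic re-implemented as a standalone sketch (names mirror the step file above; the `registered_resources` bookkeeping is omitted for brevity):

```python
import copy

_TLS_PROVIDER_BASE = {
    "provider_id": "tls-openai",
    "provider_type": "remote::openai",
    "config": {
        "api_key": "test-key",
        "base_url": "https://mock-tls-inference:8443/v1",
        "allowed_models": ["mock-tls-model"],
    },
}


def ensure_tls_provider(config: dict) -> dict:
    """Return the tls-openai inference provider, inserting it if missing."""
    inference = config.setdefault("providers", {}).setdefault("inference", [])
    for provider in inference:
        if provider.get("provider_id") == "tls-openai":
            return provider
    # Provider not found: add a fresh copy so later mutations (e.g. the
    # network.tls block each step writes) don't leak into the template.
    provider = copy.deepcopy(_TLS_PROVIDER_BASE)
    inference.append(provider)
    return provider


# Starting from an empty run config, the provider is created on the first
# call and found (not duplicated) on subsequent calls.
cfg: dict = {}
first = ensure_tls_provider(cfg)
second = ensure_tls_provider(cfg)
assert first is second
assert len(cfg["providers"]["inference"]) == 1
```

This idempotence is what lets each `@given` step call `_prepare_tls_provider()` unconditionally before writing its own `network.tls` block.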