feat: migrate pages dev server to Argo #4
📝 Walkthrough

Adds shared Docker abstractions and Argo/port utilities.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant CLI as CLI
    participant Exec as Executor/Pipeline
    participant Argo as Argo Adapter
    participant Workspace as Local Workspace
    participant Watcher as CopyWatcher (subproc)
    participant HotReload as HotReload (subproc)
    CLI->>Exec: start_local_dev_server(page, ...)
    Exec->>Argo: validate Argo image & ensure Argo running
    Exec->>Workspace: compute workspace_key & create workspace
    Exec->>Workspace: render index.html -> write workspace/index.html
    Exec->>Argo: register local_file route (/pages/{workspace_key}/ -> workspace)
    Exec->>Watcher: start copy_watcher subprocess
    Exec->>HotReload: start hot_reload subprocess (SSE)
    Watcher-->>Workspace: copy tracked file changes
    HotReload-->>CLI: SSE reload events -> browser
    CLI->>Exec: stop_local_dev_server(...)
    Exec->>Argo: deregister route
    Exec->>Watcher: stop copy_watcher
    Exec->>HotReload: stop hot_reload
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

docs: update architecture and modules docs for Argo migration
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

test: add tests for Argo migration and copy-watcher architecture
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cli/pages/tests/test_executor.py (1)
259-296: ⚠️ Potential issue | 🟡 Minor

Test is missing steps from the actual `start_local_dev_server` pipeline.

Comparing with the executor implementation (context snippet 2), the test's `mock_step_classes` list is missing:

- `CleanOrphanedPagesStep`
- `DeregisterPageFromArgoStep`

The test expects these steps to be instantiated, but they're not in the mock list, so the assertion at line 296 won't verify they were called.
Proposed fix

```diff
 mock_step_classes = [
     "ValidatePageDirectoryStep",
     "ReadPageMetadataStep",
     "ValidatePageStructureStep",
     "GetClientStep",
     "GetContainerManagerStep",
     "GetPageNameStep",
     "GetWorkspaceKeyStep",
     "ValidatePageNotRunningStep",
     "GetNetworkStep",
     "GetArgoImageNameStep",
     "ValidateArgoImageStep",
     "EnsureArgoRunningStep",
+    "CleanOrphanedPagesStep",  # NEW
     "CreateWorkspaceStep",  # NEW
     "CopyTrackedFilesStep",  # NEW
     "FindHotReloadPortStep",
     "RenderIndexHtmlStep",
+    "DeregisterPageFromArgoStep",  # NEW
     "RegisterPageInArgoStep",
     "StartCopyWatcherStep",  # NEW
     "StartHotReloadSubprocessStep",
     "StoreHotReloadPortStep",
     "PrintPageUrlStep",
 ]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/pages/tests/test_executor.py` around lines 259 - 296, The test test_start_page_pipeline_steps in test_executor.py is missing two pipeline step mocks used by start_local_dev_server: add "CleanOrphanedPagesStep" and "DeregisterPageFromArgoStep" to the mock_step_classes list so the test will instantiate and assert those steps (the test sets attributes on mock_pipelines and asserts each was called; update the mock_step_classes array used in that test to include these two names so getattr(mock_pipelines, ...) and assert_called_once() cover them).
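For reference, the assertion pattern the prompt describes can be reduced to a minimal standalone sketch; the harness below is illustrative, not the project's actual test, though the two step names mirror the ones the review says are missing:

```python
from unittest.mock import MagicMock

# Stand-in for the patched pipelines module the test operates on.
mock_pipelines = MagicMock()

mock_step_classes = [
    "CleanOrphanedPagesStep",
    "DeregisterPageFromArgoStep",
]

# Simulate the executor instantiating each step class exactly once...
for name in mock_step_classes:
    getattr(mock_pipelines, name)()

# ...then assert each was called, as the test does at line 296. A step
# name absent from the list is simply never checked, which is the bug.
for name in mock_step_classes:
    getattr(mock_pipelines, name).assert_called_once()
```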
🧹 Nitpick comments (15)
cli/pages/helpers.py (2)
192-206: Consider extracting the hardcoded port to settings.

The port `8044` is hardcoded in two places for `"subdomain"` and `"path"` routing modes. This could be a candidate for `PagesSettings` to improve maintainability and avoid magic numbers.

♻️ Suggested refactor
Add to `PagesSettings` in `cli/settings.py`:

```python
FLASK_EXTERNAL_PORT: int = 8044
```

Then update `generate_page_url`:

```diff
 def generate_page_url(page_name, routing_mode, container=None):
     sanitized = page_name.replace(" ", "-")
     if routing_mode == "subdomain":
-        flask_port = 8044
+        flask_port = settings.PAGES.FLASK_EXTERNAL_PORT
         return f"http://{sanitized}.localhost:{flask_port}/"
     if routing_mode == "path":
-        flask_port = 8044
+        flask_port = settings.PAGES.FLASK_EXTERNAL_PORT
         return f"http://localhost:{flask_port}/{sanitized}"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/pages/helpers.py` around lines 192 - 206, generate_page_url uses the magic number 8044 for the Flask external port in the "subdomain" and "path" branches; update this to read a setting from PagesSettings (e.g., FLASK_EXTERNAL_PORT) in cli/settings.py instead of hardcoding, and keep extract_port_from_container usage for the "port" routing_mode unchanged; modify generate_page_url to import PagesSettings (or accept settings) and use PagesSettings.FLASK_EXTERNAL_PORT where flask_port is currently set so the port is centralized and configurable.
168-173: Consider extracting the container name prefix to settings.

The hardcoded `"page-"` prefix replaces the previously settings-derived value. While this works, consider adding it to `PagesSettings` for consistency with other configurable prefixes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/pages/helpers.py` around lines 168 - 173, The hardcoded "page-" prefix in get_page_container should be moved into PagesSettings for configurability: add a new setting (e.g., page_container_prefix or page_prefix) on PagesSettings, import/instantiate PagesSettings where get_page_container is defined, and replace the literal "page-" with f"{PagesSettings.<new_prefix>}"; keep the same slugging logic (page_name.replace(' ', '-')) and ensure any callers still receive the same container_name semantics. Also update any tests or docs that assumed the literal "page-" if present.

cli/functions/helpers.py (1)
300-301: Consider moving imports to module level.

The imports of `ARGO_API_BASE_PATH` and `HOST_BIND` inside the function body are unconventional. Unless there's a circular import issue, these should be moved to the top of the file with the other imports for better readability and to avoid repeated import overhead on each function call.

♻️ Suggested refactor
```diff
 from cli.commons.enums import MessageColorEnum
 from cli.commons.helpers import argo_container_manager  # noqa: F401 (re-exported for existing callers)
 from cli.commons.helpers import find_available_ports
 from cli.commons.helpers import is_port_available
 from cli.commons.helpers import verify_and_fetch_images  # noqa: F401 (re-exported for existing callers)
+from cli.commons.settings import ARGO_API_BASE_PATH
+from cli.commons.settings import HOST_BIND
 from cli.functions.engines.docker.client import FunctionDockerClient
```

Then remove lines 300-301 from the function body.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/functions/helpers.py` around lines 300 - 301, Move the in-function imports of ARGO_API_BASE_PATH and HOST_BIND to the module-level imports at the top of cli/functions/helpers.py so they are imported alongside the other top-level imports; then remove the now-redundant import lines from the function body (the lines importing ARGO_API_BASE_PATH and HOST_BIND) to avoid repeated imports and improve readability. Ensure any references inside functions continue to use the same names ARGO_API_BASE_PATH and HOST_BIND.

cli/pages/engines/templates/hot_reload_server.py (3)
49-50: Parameter `format` shadows Python builtin.

The `format` parameter name shadows the built-in `format()` function. While this is a common pattern when overriding `BaseHTTPRequestHandler.log_message`, renaming it improves clarity.

♻️ Suggested rename
```diff
-    def log_message(self, format, *args):
+    def log_message(self, fmt, *args):
         pass  # Suppress default request logging
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/pages/engines/templates/hot_reload_server.py` around lines 49 - 50, The log_message override currently uses a parameter named format which shadows the built-in format() — rename that parameter (e.g., to fmt or message_format) in the log_message method signature and update any uses in the method body (function log_message in the hot_reload_server template) so it no longer conflicts with the built-in name while preserving the same behavior (still suppressing logging).
87-93: SSE client cleanup may miss some disconnect scenarios.

The try/except only catches `BrokenPipeError` and `ConnectionResetError`. Other socket-related exceptions (e.g., `OSError`, `ConnectionAbortedError`) could leave stale entries in `_sse_clients`. However, `_push_reload()` cleans up dead clients on the next broadcast, making this self-healing. Acceptable for dev tooling.

♻️ Optional: broaden exception handling
```diff
         try:
             while True:
                 time.sleep(1)
-        except (BrokenPipeError, ConnectionResetError):
+        except (BrokenPipeError, ConnectionResetError, OSError):
             with _lock:
                 if self.wfile in _sse_clients:
                     _sse_clients.remove(self.wfile)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/pages/engines/templates/hot_reload_server.py` around lines 87 - 93, The SSE client cleanup currently only catches BrokenPipeError and ConnectionResetError; broaden the exception handling in the try/except around the sleep loop to also catch other socket-related errors (e.g., ConnectionAbortedError and OSError) so self.wfile is removed from _sse_clients under more disconnect scenarios, ensuring the cleanup inside the with _lock: block (which checks self.wfile and removes it from _sse_clients) runs on those failures; mention or reference the existing _push_reload() cleanup as a fallback but implement the expanded exception clause in the same function handling the SSE connection.
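One detail worth checking before applying the broadened clause: `BrokenPipeError`, `ConnectionResetError`, and `ConnectionAbortedError` are all subclasses of `OSError`, so catching `OSError` alone would already cover every case the suggestion lists. A quick standalone illustration:

```python
# BrokenPipeError and the two Connection*Error types all derive from
# ConnectionError, which itself derives from OSError.
assert issubclass(BrokenPipeError, ConnectionError)
assert issubclass(ConnectionResetError, ConnectionError)
assert issubclass(ConnectionAbortedError, ConnectionError)
assert issubclass(ConnectionError, OSError)

# Therefore a single `except OSError:` clause subsumes the narrower tuple.
handled = False
try:
    raise BrokenPipeError("client went away")
except OSError:
    handled = True
assert handled
```

So the suggested tuple `(BrokenPipeError, ConnectionResetError, OSError)` is redundant but harmless; `except OSError:` is the minimal equivalent.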
106-124: Consider logging malformed error payloads instead of silently ignoring.

When `_capture_error` receives invalid JSON, the exception is silently swallowed. For debugging client-side issues, logging malformed payloads would help diagnose integration problems.

♻️ Optional: log decode failures
```diff
         try:
             error = json.loads(body)
             with _lock:
                 _errors.append(error)
                 if len(_errors) > _MAX_ERRORS:
                     _errors.pop(0)
             print(
                 f"[browser error] {error.get('message', '')} "
                 f"({error.get('source', '')}:{error.get('line', '')})",
                 flush=True,
             )
-        except (json.JSONDecodeError, Exception):
-            pass
+        except json.JSONDecodeError as exc:
+            print(f"[hot-reload] malformed error payload: {exc}", flush=True)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/pages/engines/templates/hot_reload_server.py` around lines 106 - 124, The except block in _capture_error currently swallows all JSON decode errors; update it to log malformed payloads for debugging by capturing the raw body (already read into body) and printing a clear message including the body and the exception details inside the except (e.g., print(f"[browser error] malformed payload: {body!r} error={e}")), while preserving the existing behavior of appending to _errors under _lock, keeping _MAX_ERRORS trimming, and still calling send_response(200), _cors(), and end_headers(); reference _capture_error, _errors, _MAX_ERRORS, and _lock when making the change.

cli/pages/tests/engines/test_helpers.py (1)
169-170: Duplicate imports inside test function.

`MagicMock` and `patch` are already imported at the module level (lines 3-4). These local imports are redundant.

♻️ Remove duplicate imports
```diff
 def test_render_index_html_new_signature_writes_to_workspace(tmp_path):
     source = tmp_path / "source"
     workspace = tmp_path / "workspace"
     source.mkdir()
     workspace.mkdir()
-    from unittest.mock import MagicMock
-    from unittest.mock import patch
     mock_metadata = MagicMock()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/pages/tests/engines/test_helpers.py` around lines 169 - 170, Remove the redundant local imports of MagicMock and patch inside the test function: keep using the module-level imports already defined at the top of cli/pages/tests/engines/test_helpers.py and delete the lines "from unittest.mock import MagicMock" and "from unittest.mock import patch" so the test continues to reference MagicMock and patch from the existing module-level imports.

cli/pages/tests/test_pipelines.py (1)
163-166: Clarify test intent with explicit `os.kill` mock behavior.

The comment says "Simulate process exists" but `patch("os.kill")` without a `side_effect` just makes `os.kill` return `None` (default MagicMock behavior). While this works (no exception = process exists), making it explicit improves test readability.

♻️ Optional: explicit mock return
```diff
-        with patch("os.kill"), self.assertRaises(
+        with patch("os.kill", return_value=None), self.assertRaises(
             PageIsAlreadyRunningError
         ):  # Simulate process exists
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/pages/tests/test_pipelines.py` around lines 163 - 166, The test currently patches os.kill without specifying behavior; make the intention explicit by patching os.kill to return None (e.g., patch("os.kill", return_value=None)) so it's clear we're simulating an existing process, then assert PageIsAlreadyRunningError is raised when calling step.execute(data); reference the os.kill patch, the step.execute call, and the PageIsAlreadyRunningError in the test update.

cli/pages/pipelines/dev_engine.py (3)
332-345: Silent exception swallowing in `CopyTrackedFilesStep` hides copy failures.

Catching `(OSError, ValueError)` and passing silently means file copy failures go unnoticed. Consider logging at debug level or collecting failures for summary reporting.

♻️ Log copy failures
```diff
+import logging
+
+logger = logging.getLogger(__name__)
+
 class CopyTrackedFilesStep(PipelineStep):
     def execute(self, data):
         source_dir = data["project_path"]
         workspace_dir = data["workspace_path"]
         tracked = get_tracked_files(source_dir)
         for src_file in tracked:
             try:
                 rel = src_file.relative_to(source_dir)
                 dst = workspace_dir / rel
                 dst.parent.mkdir(parents=True, exist_ok=True)
                 shutil.copy2(src_file, dst)
             except (OSError, ValueError):
-                pass
+                logger.debug("Failed to copy %s", src_file, exc_info=True)
         return data
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/pages/pipelines/dev_engine.py` around lines 332 - 345, The CopyTrackedFilesStep.execute currently swallows OSError/ValueError which hides copy failures; update execute (and keep using get_tracked_files and the tracked loop) to catch exceptions but record or log them instead of pass: log the src_file and exception at debug or error level via the existing logger (or add one) and/or append failures to a failures list on data (e.g., data["copy_failures"]) so callers can summarize; ensure dst.parent.mkdir and shutil.copy2 remain inside the try but do not re-raise, and include the file path and exception message in the log/record for each failure.
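A small caveat for whoever applies the logging fix: `exc_info=True` only attaches a traceback while an exception is being handled, so the call must stay inside the `except` block. A minimal standalone illustration (the logger name and file name below are illustrative):

```python
import logging

records = []

class Capture(logging.Handler):
    """Collect emitted records so we can inspect them."""
    def emit(self, record):
        records.append(record)

logger = logging.getLogger("copy_demo")
logger.setLevel(logging.DEBUG)
logger.addHandler(Capture())

try:
    raise OSError("disk full")
except OSError:
    # Inside the except block, exc_info=True captures the active
    # exception's traceback on the log record.
    logger.debug("Failed to copy %s", "style.css", exc_info=True)

assert records[0].exc_info is not None
assert records[0].getMessage() == "Failed to copy style.css"
```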
348-368: Copy watcher subprocess stderr/stdout sent to DEVNULL loses debugging information.

Unlike `StartHotReloadSubprocessStep`, which logs to `.hot_reload.log`, the copy watcher sends output to `DEVNULL`. The watcher itself logs to `.copy_watcher.log` in the workspace (via `_setup_logging`), so this is acceptable. However, any Python-level exceptions before logging setup would be lost.
Verify each finding against the current code and only fix it if needed. In `@cli/pages/pipelines/dev_engine.py` around lines 348 - 368, StartCopyWatcherStep currently discards subprocess stdout/stderr to DEVNULL which hides Python-level exceptions before the watcher's own logging starts; change StartCopyWatcherStep.execute to open the workspace log file (data["workspace_path"] / ".copy_watcher.log") in append binary mode and pass that file handle for both stdout and stderr when creating the subprocess (using the same copy_watcher_script and args), keeping start_new_session=True, and still write the PID to .watcher.pid; this ensures early errors are captured in .copy_watcher.log while preserving existing behavior.
236-253: Log file handle left open during subprocess lifetime.

The `with open(log_file, "w") as log:` context manager closes the file handle after `Popen` returns, but the subprocess continues writing to stdout/stderr, which are still connected to the file descriptor. This works because file descriptors are duplicated into the subprocess, even though the parent's handle is closed afterwards. This is actually fine for the use case, but consider opening without a context manager for clarity.

♻️ Alternative: open without context manager
```diff
-        log_file = data["workspace_path"] / ".hot_reload.log"
-        with open(log_file, "w") as log:
-            proc = subprocess.Popen(
-                [
-                    sys.executable,
-                    str(hot_reload_script),
-                    "--page-workspace",
-                    str(data["workspace_path"]),
-                    "--port",
-                    str(data["hot_reload_port"]),
-                ],
-                stdout=log,
-                stderr=log,
-                start_new_session=True,
-            )
+        log_file = data["workspace_path"] / ".hot_reload.log"
+        log_handle = open(log_file, "w")  # noqa: SIM115
+        proc = subprocess.Popen(
+            [
+                sys.executable,
+                str(hot_reload_script),
+                "--page-workspace",
+                str(data["workspace_path"]),
+                "--port",
+                str(data["hot_reload_port"]),
+            ],
+            stdout=log_handle,
+            stderr=log_handle,
+            start_new_session=True,
+        )
+        log_handle.close()  # Safe: subprocess has its own fd copy
```

Actually, the current pattern is fine: `Popen` duplicates the fd before the `with` block exits. No change needed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/pages/pipelines/dev_engine.py` around lines 236 - 253, The log file is opened with a context manager while launching the hot-reload subprocess, which closes the parent's file handle immediately; change to open the file without a with-statement so the handle remains open for the subprocess lifetime: replace the with open(log_file, "w") as log: block by assigning log = open(log_file, "w"), pass that log into subprocess.Popen (the existing Popen call), and ensure you keep or store the handle (e.g., attach to data like data["hot_reload_log"]) so it can be closed later when data["hot_reload_process"] is terminated; keep the pid_file write_text and data["hot_reload_process"] assignments unchanged.cli/pages/tests/test_dev_engine_steps.py (1)
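The fd-duplication behavior this comment hinges on is easy to demonstrate in isolation. The sketch below (paths and the child command are purely illustrative) closes the parent's handle immediately after spawning, yet the child's output still reaches the file:

```python
import os
import subprocess
import sys
import tempfile

log_path = os.path.join(tempfile.mkdtemp(), "child.log")

with open(log_path, "w") as log:
    # Popen duplicates `log`'s file descriptor into the child at spawn time.
    proc = subprocess.Popen(
        [sys.executable, "-c", "import time; time.sleep(0.2); print('still writing')"],
        stdout=log,
        stderr=log,
    )
# The parent's handle is closed here, but the child holds its own fd copy.

proc.wait()
with open(log_path) as f:
    content = f.read()

assert "still writing" in content  # child wrote after the parent closed its handle
```

This is why the reviewer's own follow-up ("no change needed") is correct; the alternative form only makes the intent more explicit.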
150-167: Test writes `.pid` to a confusingly named directory for `StopHotReloadSubprocessStep`.

The test creates `workspace / ".pid"` (lines 153-154) and then calls `StopHotReloadSubprocessStep().execute(make_data(project_path=workspace))`. Looking at the implementation (context snippet 1, line 250), `StartHotReloadSubprocessStep` writes `.pid` to `data["project_path"]`, which is the source directory, not the workspace. The test should be consistent with this behavior.

Examining the test more closely, it exercises `StopHotReloadSubprocessStep`, which reads from `data["project_path"] / ".pid"`. Since `project_path=workspace` is passed, the `.pid` placement is correct for this test. The naming (`workspace`) is just confusing; it's actually used as the `project_path`.

♻️ Rename variable for clarity
```diff
 def test_stop_hot_reload_subprocess_waits_for_exit(tmp_path):
-    workspace = tmp_path / "my-page-abc12345"
-    workspace.mkdir()
-    pid_file = workspace / ".pid"
+    source = tmp_path / "my-page-abc12345"
+    source.mkdir()
+    pid_file = source / ".pid"
     pid_file.write_text("99999")

     call_count = [0]
@@ -163,7 +163,7 @@
             raise ProcessLookupError

     with patch("os.kill", side_effect=fake_kill):
-        StopHotReloadSubprocessStep().execute(make_data(project_path=workspace))
+        StopHotReloadSubprocessStep().execute(make_data(project_path=source))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/pages/tests/test_dev_engine_steps.py` around lines 150 - 167, The test's variable name "workspace" is misleading because StopHotReloadSubprocessStep reads the .pid from data["project_path"]; rename the test variable to reflect that it's the project directory (e.g., project_path or project_dir) and update any comments to clarify that pid_file = project_path / ".pid" is the intended placement; ensure the call remains StopHotReloadSubprocessStep().execute(make_data(project_path=project_path)) and that assertions about pid removal and os.kill call_count still pass.

cli/commons/engines/docker/container.py (1)
66-75: Container removal in `start()` doesn't wait for removal completion.

When removing a stopped container before starting a new one, `existing.remove()` is called without a wait or force flag. If the container has volumes or is in a transitional state, removal might fail or be slow.

♻️ Consider force removal
```diff
         with suppress(NotFound):
             existing = self.client.containers.get(container_name)
             if existing.status == "running":
                 raise ContainerAlreadyRunningException(container_name=container_name)
-            existing.remove()
+            existing.remove(force=True)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/commons/engines/docker/container.py` around lines 66 - 75, In start(), the code calls existing.remove() without forcing or waiting which can hang or fail for containers with volumes or in transitional states; update the removal to use existing.remove(force=True, v=True) (or appropriate kwargs) and then wait/poll until the container is gone (e.g., loop and call client.containers.get(container_name) or existing.reload() catching NotFound) before calling client.containers.run; keep the existing exception handling (ContainerAlreadyRunningException, ContainerExecutionException) unchanged.cli/commons/helpers.py (1)
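The wait-until-gone polling the prompt describes can be sketched independently of Docker. Everything below is illustrative: `NotFound` stands in for `docker.errors.NotFound`, and the stub client simulates a removal that completes after a couple of polls:

```python
import time

class NotFound(Exception):
    """Stand-in for docker.errors.NotFound."""

class StubContainers:
    """Pretends the container disappears after two get() calls."""
    def __init__(self):
        self._polls = 0

    def get(self, name):
        self._polls += 1
        if self._polls > 2:
            raise NotFound(name)
        return object()  # container still visible

def wait_until_removed(containers, name, timeout=5.0, interval=0.01):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            containers.get(name)
        except NotFound:
            return True  # container is gone; safe to run a replacement
        time.sleep(interval)
    return False  # caller decides whether to raise

assert wait_until_removed(StubContainers(), "page-demo") is True
```

With the real client, the same loop would call `self.client.containers.get(container_name)` and catch the real `NotFound` before proceeding to `containers.run`.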
130-140: Redundant `validate_engine_installed()` call inside the loop.

`validate_engine_installed()` is called once per image, but Docker engine availability doesn't change between iterations. Moving this check outside the loop would be more efficient.

Proposed fix
```diff
 def verify_and_fetch_images(client: Any, image_names: list[str]) -> None:
     """Pull Docker/Podman images, raising on failure.

     Works on any client implementing get_validator() and get_downloader().
     Exceptions from validate_engine_installed() and pull_image() propagate as-is.
     """
     validator = client.get_validator()
+    validator.validate_engine_installed()
+    downloader = client.get_downloader()
     for image_name in image_names:
-        validator.validate_engine_installed()
-        downloader = client.get_downloader()
         downloader.pull_image(image_name=image_name)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/commons/helpers.py` around lines 130 - 140, The function verify_and_fetch_images currently calls validator.validate_engine_installed() on every iteration; move the engine validation out of the loop by calling client.get_validator() once, invoking validator.validate_engine_installed() before the for loop, then iterate image_names and call downloader.pull_image(image_name=...) for each; you can also obtain downloader = client.get_downloader() once before the loop to avoid repeated calls to client.get_downloader(), preserving the same exceptions/behavior if validation or pull fails.

cli/pages/engines/helpers.py (1)
178-200: Consider improving the retry logic with proper backoff and logging.

The current implementation has a fixed 1-second sleep before the first request (purpose unclear) and a single retry after 2 seconds on any failure. This could be improved:

- The initial `time.sleep(1)` appears to be a timing workaround; consider documenting why this delay is needed.
- The retry doesn't distinguish between transient failures (worth retrying) and permanent errors (e.g., 400 Bad Request).
- If both attempts fail, only the second exception propagates, losing context about the first failure.
Suggested improvement
```diff
 def register_page_in_argo(workspace_key: str, argo_admin_port: int) -> None:
     """Register a local_file route for workspace_key in Argo."""
-    time.sleep(1)
+    # Brief delay to allow Argo container to fully initialize routes after startup
+    time.sleep(1)
     payload = {
         # ... payload unchanged ...
     }
     url = f"http://localhost:{argo_admin_port}/{ARGO_API_BASE_PATH}/"
-    response = httpx.post(url, json=payload, timeout=10.0)
-    if not response.is_success:
-        time.sleep(2)
-        response = httpx.post(url, json=payload, timeout=10.0)
-    response.raise_for_status()
+    for attempt in range(2):
+        response = httpx.post(url, json=payload, timeout=10.0)
+        if response.is_success:
+            return
+        if attempt == 0:
+            time.sleep(2)
+    response.raise_for_status()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/pages/engines/helpers.py` around lines 178 - 200, The register_page_in_argo function uses a blunt sleep and a single blind retry; change it to perform a small documented initial delay only if needed, then attempt the POST with an exponential backoff retry loop (e.g., 3 attempts with increasing delays) that retries only on transient errors (network exceptions or 5xx responses) and does not retry on 4xx client errors; log each attempt and its response/error via the same logger (including response.status_code and response.text when available) and if all attempts fail raise an exception that preserves both the last error and the first failure context so debugging shows both failures.
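The policy the prompt asks for — retry transient failures with backoff, fail fast on client errors — can be sketched without `httpx`. The `post` callable, status codes, and function name below are stand-ins, not the project's API:

```python
import time

def post_with_retry(post, url, attempts=3, base_delay=0.01):
    """Retry on exceptions and 5xx responses; fail fast on 4xx."""
    last_exc = None
    for attempt in range(attempts):
        try:
            status = post(url)
        except ConnectionError as exc:  # transient network failure: retry
            last_exc = exc
        else:
            if status < 400:
                return status  # success
            if status < 500:
                # Client error: retrying won't help, surface it immediately
                raise RuntimeError(f"client error {status}, not retrying")
            last_exc = RuntimeError(f"server error {status}")
        time.sleep(base_delay * (2 ** attempt))  # exponential backoff
    raise last_exc

# Succeeds on the third attempt after two transient 503s.
responses = iter([503, 503, 200])
assert post_with_retry(lambda url: next(responses), "http://example") == 200
```

Preserving the first failure's context, as the prompt also asks, could be done with `raise last_exc from first_exc` once both are tracked.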
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cli/commons/helpers.py`:
- Around line 39-51: The function find_available_ports currently can return
fewer ports than requested without signaling failure; update it so after filling
from the start_range..end_range loop you check if len(available) != len(ports)
and if so raise a ValueError (or a custom exception) describing the requested
count and the found ports; modify the function (find_available_ports) to perform
this final length check and raise a clear error (including requested ports,
start_range/end_range, and the available list) instead of returning a shorter
list.
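The contract this prompt describes — scan a range, raise if the requested count can't be met — can be sketched as a standalone helper. The function name mirrors the prompt, but the signature and bind-based availability check are illustrative, not the project's actual implementation:

```python
import socket

def find_available_ports(count, start_range, end_range):
    """Return `count` free ports in [start_range, end_range), else raise."""
    available = []
    for port in range(start_range, end_range):
        if len(available) == count:
            break
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            try:
                sock.bind(("127.0.0.1", port))  # probe: bind succeeds only if free
            except OSError:
                continue  # port in use
            available.append(port)
    if len(available) != count:
        # Final length check: never silently return fewer ports than requested
        raise ValueError(
            f"requested {count} ports in {start_range}-{end_range}, "
            f"found only {available}"
        )
    return available

ports = find_available_ports(2, 49200, 49300)
assert len(ports) == 2
```

Note the bind-probe has a TOCTOU window (a port free at probe time can be taken before use), which is acceptable for dev tooling.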
In `@cli/pages/engines/docker/container.py`:
- Around line 14-19: The stop method currently suppresses ContainerNotFoundError
and all Exception which hides real errors; change the suppression to only catch
ContainerNotFoundError (and any specific Docker client exception type you
expect, e.g., DockerException) and handle/log unexpected errors instead of
swallowing them: call self.get(label=label) and if container.status == "running"
call container.stop() inside a try that specifically catches
ContainerNotFoundError to ignore missing containers, catch the Docker-specific
exception type to log a warning, and let other exceptions propagate (or log them
via your logger) so real failures are visible; update references to stop, get,
ContainerNotFoundError, and container.stop accordingly.
In `@cli/pages/engines/docker/validators.py`:
- Line 20: The EngineNotInstalledException is being raised with the enum object
self.engine (a PageEngineTypeEnum) which yields noisy output; update the raise
to pass a user-friendly string by using the enum's value (e.g.,
self.engine.value or str(self.engine.value)) instead of self.engine so
EngineNotInstalledException receives a plain engine name; change the raise in
validators.py where EngineNotInstalledException(engine=self.engine) is used to
EngineNotInstalledException(engine=self.engine.value) (or equivalent string
conversion) so the exception message is clean.
In `@cli/pages/engines/helpers.py`:
- Around line 76-103: The try/except that calls
DashboardPageModel.load_from_project silently swallows all errors, hiding
manifest parsing issues; modify the except to log the caught Exception at debug
level (include the exception message/traceback) while preserving current
behavior (do not re-raise), so callers of this block that build `tracked` from
`model.js_libraries`, `model.css_libraries`, `model.link_libraries`, and
`model.static_paths` still behave the same but developers can diagnose failures;
reference DashboardPageModel.load_from_project, the except block around it, and
the `tracked`/`source_dir` processing when adding the debug log.
In `@cli/pages/engines/templates/copy_watcher.py`:
- Around line 111-124: The reassignment tracked_set = {str(f) for f in
new_tracked} inside the loop shadows the outer tracked_set and doesn't update it
for subsequent iterations; replace the reassignment with mutating the existing
set (e.g., clear() then update() with str(f) for f in new_tracked) so the outer
tracked_set used by the path_str in tracked_set check is actually updated after
_copy_all returns, and ensure this change is applied in the same block where
new_tracked is assigned (around the manifest.toml handling that calls _copy_all
and _render).
In `@cli/pages/handlers.py`:
- Around line 34-40: The update_page function currently builds an
UpdatePagePayload dict and calls client.patch even when no fields are set; add a
guard after constructing data (referencing UpdatePagePayload and the local
variable data in update_page) that checks if data is empty and raises a clear
ValueError (or returns an error) instead of making the HTTP call, so
client.patch(url, headers=headers, json=data) is only invoked when data contains
at least one field.
In `@docs/development/modules.md`:
- Around line 227-228: The docs reference outdated Argo images; update both
occurrences of "ubidots/functions-argo:2.0.1" and "ubidots/functions-argo:2.0.2"
to match the codebase ARGO_VERSION by replacing them with
"ubidots/functions-argo:2.1.0" so the Pages/Functions docs align with the
ARGO_IMAGE_NAME/ARGO_VERSION constant defined in cli/commons/settings.py.
---
Outside diff comments:
In `@cli/pages/tests/test_executor.py`:
- Around line 259-296: The test test_start_page_pipeline_steps in
test_executor.py is missing two pipeline step mocks used by
start_local_dev_server: add "CleanOrphanedPagesStep" and
"DeregisterPageFromArgoStep" to the mock_step_classes list so the test will
instantiate and assert those steps (the test sets attributes on mock_pipelines
and asserts each was called; update the mock_step_classes array used in that
test to include these two names so getattr(mock_pipelines, ...) and
assert_called_once() cover them).
---
Nitpick comments:
In `@cli/commons/engines/docker/container.py`:
- Around line 66-75: In start(), the code calls existing.remove() without
forcing or waiting which can hang or fail for containers with volumes or in
transitional states; update the removal to use existing.remove(force=True,
v=True) (or appropriate kwargs) and then wait/poll until the container is gone
(e.g., loop and call client.containers.get(container_name) or existing.reload()
catching NotFound) before calling client.containers.run; keep the existing
exception handling (ContainerAlreadyRunningException,
ContainerExecutionException) unchanged.
In `@cli/commons/helpers.py`:
- Around line 130-140: The function verify_and_fetch_images currently calls
validator.validate_engine_installed() on every iteration; move the engine
validation out of the loop by calling client.get_validator() once, invoking
validator.validate_engine_installed() before the for loop, then iterate
image_names and call downloader.pull_image(image_name=...) for each; you can
also obtain downloader = client.get_downloader() once before the loop to avoid
repeated calls to client.get_downloader(), preserving the same
exceptions/behavior if validation or pull fails.
In `@cli/functions/helpers.py`:
- Around line 300-301: Move the in-function imports of ARGO_API_BASE_PATH and
HOST_BIND to the module-level imports at the top of cli/functions/helpers.py so
they are imported alongside the other top-level imports; then remove the
now-redundant import lines from the function body (the lines importing
ARGO_API_BASE_PATH and HOST_BIND) to avoid repeated imports and improve
readability. Ensure any references inside functions continue to use the same
names ARGO_API_BASE_PATH and HOST_BIND.
In `@cli/pages/engines/helpers.py`:
- Around line 178-200: The register_page_in_argo function uses a blunt sleep and
a single blind retry; change it to perform a small documented initial delay only
if needed, then attempt the POST with an exponential backoff retry loop (e.g., 3
attempts with increasing delays) that retries only on transient errors (network
exceptions or 5xx responses) and does not retry on 4xx client errors; log each
attempt and its response/error via the same logger (including
response.status_code and response.text when available) and if all attempts fail
raise an exception that preserves both the last error and the first failure
context so debugging shows both failures.
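The retry policy described above can be sketched in isolation. `post_with_backoff` and `TransientError` are hypothetical names; a real implementation would map network exceptions and 5xx responses to the transient case and 4xx responses to the non-retryable one.

```python
import time

class TransientError(Exception):
    """Stand-in for a network failure or a 5xx response."""

def post_with_backoff(post, attempts=3, base_delay=0.1, sleep=time.sleep):
    """Call `post()` with exponential backoff, retrying only on
    transient failures. Client errors (modeled here as ValueError,
    standing in for 4xx responses) are raised immediately."""
    last_error = None
    for attempt in range(attempts):
        if attempt:
            sleep(base_delay * (2 ** (attempt - 1)))  # 0.1s, 0.2s, ...
        try:
            return post()
        except TransientError as exc:
            last_error = exc  # transient: retry
        except ValueError:
            raise  # client error: do not retry
    raise RuntimeError(f"all {attempts} attempts failed") from last_error

# Succeeds on the third attempt.
calls = {"n": 0}
def flaky_post():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError("503")
    return "ok"

assert post_with_backoff(flaky_post, sleep=lambda _: None) == "ok"
assert calls["n"] == 3
```

Chaining the final exception from `last_error` keeps both the last failure and its predecessor in the traceback, which is the "preserve both failure contexts" part of the comment.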
In `@cli/pages/engines/templates/hot_reload_server.py`:
- Around line 49-50: The log_message override currently uses a parameter named
format which shadows the built-in format() — rename that parameter (e.g., to fmt
or message_format) in the log_message method signature and update any uses in
the method body (function log_message in the hot_reload_server template) so it
no longer conflicts with the built-in name while preserving the same behavior
(still suppressing logging).
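The rename is mechanical; a minimal sketch, assuming the template's handler subclasses the stdlib `BaseHTTPRequestHandler` (the hypothetical `QuietHandler` name is for illustration):

```python
from http.server import BaseHTTPRequestHandler

class QuietHandler(BaseHTTPRequestHandler):
    # First positional parameter renamed from `format` to `fmt` so it
    # no longer shadows the built-in format(); behavior is unchanged,
    # logging stays suppressed.
    def log_message(self, fmt, *args):
        pass
```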
- Around line 87-93: The SSE client cleanup currently only catches
BrokenPipeError and ConnectionResetError; broaden the exception handling in the
try/except around the sleep loop to also catch other socket-related errors
(e.g., ConnectionAbortedError and OSError) so self.wfile is removed from
_sse_clients under more disconnect scenarios, ensuring the cleanup inside the
with _lock: block (which checks self.wfile and removes it from _sse_clients)
runs on those failures; mention or reference the existing _push_reload() cleanup
as a fallback but implement the expanded exception clause in the same function
handling the SSE connection.
- Around line 106-124: The except block in _capture_error currently swallows all
JSON decode errors; update it to log malformed payloads for debugging by
capturing the raw body (already read into body) and printing a clear message
including the body and the exception details inside the except (e.g.,
print(f"[browser error] malformed payload: {body!r} error={e}")), while
preserving the existing behavior of appending to _errors under _lock, keeping
_MAX_ERRORS trimming, and still calling send_response(200), _cors(), and
end_headers(); reference _capture_error, _errors, _MAX_ERRORS, and _lock when
making the change.
In `@cli/pages/helpers.py`:
- Around line 192-206: generate_page_url uses the magic number 8044 for the
Flask external port in the "subdomain" and "path" branches; update this to read
a setting from PagesSettings (e.g., FLASK_EXTERNAL_PORT) in cli/settings.py
instead of hardcoding, and keep extract_port_from_container usage for the "port"
routing_mode unchanged; modify generate_page_url to import PagesSettings (or
accept settings) and use PagesSettings.FLASK_EXTERNAL_PORT where flask_port is
currently set so the port is centralized and configurable.
- Around line 168-173: The hardcoded "page-" prefix in get_page_container should
be moved into PagesSettings for configurability: add a new setting (e.g.,
page_container_prefix or page_prefix) on PagesSettings, import/instantiate
PagesSettings where get_page_container is defined, and replace the literal
"page-" with f"{PagesSettings.<new_prefix>}"; keep the same slugging logic
(page_name.replace(' ', '-')) and ensure any callers still receive the same
container_name semantics. Also update any tests or docs that assumed the literal
"page-" if present.
In `@cli/pages/pipelines/dev_engine.py`:
- Around line 332-345: The CopyTrackedFilesStep.execute currently swallows
OSError/ValueError which hides copy failures; update execute (and keep using
get_tracked_files and the tracked loop) to catch exceptions but record or log
them instead of pass: log the src_file and exception at debug or error level via
the existing logger (or add one) and/or append failures to a failures list on
data (e.g., data["copy_failures"]) so callers can summarize; ensure
dst.parent.mkdir and shutil.copy2 remain inside the try but do not re-raise, and
include the file path and exception message in the log/record for each failure.
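A sketch of the record-instead-of-swallow behavior, using a hypothetical `copy_tracked_files` helper in place of the real step class:

```python
import shutil
import tempfile
from pathlib import Path

def copy_tracked_files(tracked, workspace, data):
    """Copy each (src, rel_path) pair into `workspace`, recording,
    rather than silently swallowing, any per-file failure in
    data["copy_failures"] so callers can summarize them later."""
    failures = data.setdefault("copy_failures", [])
    for src_file, rel_path in tracked:
        dst = workspace / rel_path
        try:
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src_file, dst)
        except (OSError, ValueError) as exc:
            failures.append((str(src_file), str(exc)))  # no re-raise

# One file exists, one does not; the missing file is recorded.
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "a.txt").write_text("hello")
    data = {}
    copy_tracked_files(
        [(root / "a.txt", "a.txt"), (root / "missing.txt", "b.txt")],
        root / "workspace",
        data,
    )
    assert (root / "workspace" / "a.txt").read_text() == "hello"
    assert len(data["copy_failures"]) == 1
```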
- Around line 348-368: StartCopyWatcherStep currently discards subprocess
stdout/stderr to DEVNULL which hides Python-level exceptions before the
watcher's own logging starts; change StartCopyWatcherStep.execute to open the
workspace log file (data["workspace_path"] / ".copy_watcher.log") in append
binary mode and pass that file handle for both stdout and stderr when creating
the subprocess (using the same copy_watcher_script and args), keeping
start_new_session=True, and still write the PID to .watcher.pid; this ensures
early errors are captured in .copy_watcher.log while preserving existing
behavior.
- Around line 236-253: The log file is opened with a context manager while
launching the hot-reload subprocess, which closes the parent's file handle
immediately; change to open the file without a with-statement so the handle
remains open for the subprocess lifetime: replace the with open(log_file, "w")
as log: block by assigning log = open(log_file, "w"), pass that log into
subprocess.Popen (the existing Popen call), and ensure you keep or store the
handle (e.g., attach to data like data["hot_reload_log"]) so it can be closed
later when data["hot_reload_process"] is terminated; keep the pid_file
write_text and data["hot_reload_process"] assignments unchanged.
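The keep-the-handle-open pattern can be sketched as follows; `start_with_log` is a hypothetical stand-in for the step's `Popen` call:

```python
import subprocess
import sys
import tempfile
from pathlib import Path

def start_with_log(cmd, log_file: Path):
    """Launch `cmd` with stdout/stderr redirected to `log_file`.

    The handle is deliberately opened without a context manager and
    attached to the Popen object so it stays valid for the
    subprocess's lifetime; close it explicitly when the process is
    stopped.
    """
    log = open(log_file, "ab")  # kept open on purpose
    proc = subprocess.Popen(cmd, stdout=log, stderr=log)
    proc.log_handle = log  # remember it so teardown can close it
    return proc

with tempfile.TemporaryDirectory() as tmp:
    log_path = Path(tmp) / "dev.log"
    proc = start_with_log(
        [sys.executable, "-c", "print('hot reload up')"], log_path
    )
    proc.wait(timeout=10)
    proc.log_handle.close()  # paired cleanup once the process exits
    contents = log_path.read_text()

assert "hot reload up" in contents
```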
In `@cli/pages/tests/engines/test_helpers.py`:
- Around line 169-170: Remove the redundant local imports of MagicMock and patch
inside the test function: keep using the module-level imports already defined at
the top of cli/pages/tests/engines/test_helpers.py and delete the lines "from
unittest.mock import MagicMock" and "from unittest.mock import patch" so the
test continues to reference MagicMock and patch from the existing module-level
imports.
In `@cli/pages/tests/test_dev_engine_steps.py`:
- Around line 150-167: The test's variable name "workspace" is misleading
because StopHotReloadSubprocessStep reads the .pid from data["project_path"];
rename the test variable to reflect that it's the project directory (e.g.,
project_path or project_dir) and update any comments to clarify that pid_file =
project_path / ".pid" is the intended placement; ensure the call remains
StopHotReloadSubprocessStep().execute(make_data(project_path=project_path)) and
that assertions about pid removal and os.kill call_count still pass.
In `@cli/pages/tests/test_pipelines.py`:
- Around line 163-166: The test currently patches os.kill without specifying
behavior; make the intention explicit by patching os.kill to return None (e.g.,
patch("os.kill", return_value=None)) so it's clear we're simulating an existing
process, then assert PageIsAlreadyRunningError is raised when calling
step.execute(data); reference the os.kill patch, the step.execute call, and the
PageIsAlreadyRunningError in the test update.
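The explicit-patch style the comment asks for can be sketched outside the real test suite; `process_is_running` is a hypothetical helper mirroring the step's liveness check:

```python
import os
from unittest.mock import patch

def process_is_running(pid: int) -> bool:
    """os.kill(pid, 0) raises if the PID does not exist; a clean
    return means the process is alive."""
    try:
        os.kill(pid, 0)
        return True
    except (ProcessLookupError, PermissionError):
        return False

# Patching os.kill with an explicit return_value=None makes it
# unambiguous that the test simulates an existing process.
with patch("os.kill", return_value=None) as mock_kill:
    assert process_is_running(12345) is True
    mock_kill.assert_called_once_with(12345, 0)

# Simulate a dead process by making the patched os.kill raise.
with patch("os.kill", side_effect=ProcessLookupError):
    assert process_is_running(12345) is False
```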
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: c79959b3-0697-4520-9fe4-f4b1d5525d4b
📒 Files selected for processing (59)
- cli/commons/engines/__init__.py
- cli/commons/engines/docker/__init__.py
- cli/commons/engines/docker/client.py
- cli/commons/engines/docker/container.py
- cli/commons/engines/docker/network.py
- cli/commons/exceptions.py
- cli/commons/helpers.py
- cli/commons/helpers_tests.py
- cli/commons/settings.py
- cli/functions/commands.py
- cli/functions/engines/docker/client.py
- cli/functions/engines/docker/container.py
- cli/functions/engines/docker/network.py
- cli/functions/engines/exceptions.py
- cli/functions/engines/models.py
- cli/functions/engines/settings.py
- cli/functions/engines/tests/test_models.py
- cli/functions/helpers.py
- cli/functions/pipelines.py
- cli/functions/tests/test_utils.py
- cli/pages/commands.py
- cli/pages/engines/abstracts/__init__.py
- cli/pages/engines/abstracts/client.py
- cli/pages/engines/abstracts/container.py
- cli/pages/engines/abstracts/network.py
- cli/pages/engines/docker/Dockerfile
- cli/pages/engines/docker/README.md
- cli/pages/engines/docker/build_image.py
- cli/pages/engines/docker/client.py
- cli/pages/engines/docker/container.py
- cli/pages/engines/docker/image.py
- cli/pages/engines/docker/network.py
- cli/pages/engines/docker/validators.py
- cli/pages/engines/exceptions.py
- cli/pages/engines/helpers.py
- cli/pages/engines/settings.py
- cli/pages/engines/templates/copy_watcher.py
- cli/pages/engines/templates/flask_manager.py
- cli/pages/engines/templates/hot_reload_server.py
- cli/pages/engines/templates/page_server.py
- cli/pages/executor.py
- cli/pages/handlers.py
- cli/pages/helpers.py
- cli/pages/models.py
- cli/pages/pipelines/__init__.py
- cli/pages/pipelines/cloud_crud.py
- cli/pages/pipelines/dev_engine.py
- cli/pages/pipelines/dev_scaffold.py
- cli/pages/pipelines/sync.py
- cli/pages/tests/engines/test_helpers.py
- cli/pages/tests/test_dev_commands.py
- cli/pages/tests/test_dev_engine_steps.py
- cli/pages/tests/test_dev_scaffold_workspace.py
- cli/pages/tests/test_executor.py
- cli/pages/tests/test_pipelines.py
- cli/pages/tests/test_pull_workspace.py
- cli/settings.py
- docs/development/architecture.md
- docs/development/modules.md
💤 Files with no reviewable changes (9)
- cli/pages/engines/abstracts/__init__.py
- cli/pages/engines/docker/README.md
- cli/pages/engines/abstracts/client.py
- cli/pages/engines/abstracts/network.py
- cli/pages/engines/docker/build_image.py
- cli/pages/engines/docker/Dockerfile
- cli/pages/engines/abstracts/container.py
- cli/pages/engines/templates/flask_manager.py
- cli/pages/engines/templates/page_server.py
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cli/pages/tests/test_executor.py (1)
259-296: ⚠️ Potential issue | 🟡 Minor

Test step list is missing `CleanOrphanedPagesStep` and `DeregisterPageFromArgoStep`.

Comparing with `cli/pages/executor.py:start_local_dev_server` (context snippet 1), the test's `mock_step_classes` list is missing `CleanOrphanedPagesStep` (line 63 in executor) and `DeregisterPageFromArgoStep` (line 68). This will cause the test to not verify these steps are instantiated.

🐛 Proposed fix

```diff
     "EnsureArgoRunningStep",
+    "CleanOrphanedPagesStep",  # Missing
     "CreateWorkspaceStep",  # NEW
     "CopyTrackedFilesStep",  # NEW
     "FindHotReloadPortStep",
     "RenderIndexHtmlStep",
+    "DeregisterPageFromArgoStep",  # Missing
     "RegisterPageInArgoStep",
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/pages/tests/test_executor.py` around lines 259 - 296, The test test_start_page_pipeline_steps's mock_step_classes is missing two pipeline step classes declared in start_local_dev_server: add "CleanOrphanedPagesStep" and "DeregisterPageFromArgoStep" to the mock_step_classes list, ensure you set them on the mock_pipelines (via setattr) like the others, and keep the final loop that asserts getattr(mock_pipelines, step_class).assert_called_once() so those two steps are also verified as instantiated.
♻️ Duplicate comments (3)
cli/pages/engines/docker/container.py (1)
14-19: ⚠️ Potential issue | 🟡 Minor

Don't suppress every shutdown failure.

Line 16 still suppresses `Exception`, so Docker/client errors and plain programming mistakes disappear during cleanup. Only the expected missing-container case should be ignored here; everything else should be logged or allowed to surface. This is still unresolved from the earlier review.

Suggested fix

```diff
-from contextlib import suppress
+import logging
 from dataclasses import dataclass
 from dataclasses import field

+from docker.errors import DockerException
+
 from cli.commons.engines.docker.container import BaseDockerContainerManager
 from cli.commons.exceptions import ContainerNotFoundError
 from cli.pages.engines.enums import PageEngineTypeEnum

+logger = logging.getLogger(__name__)
+

 @dataclass
 class PageDockerContainerManager(BaseDockerContainerManager):
     engine: PageEngineTypeEnum = field(default=PageEngineTypeEnum.DOCKER)

     def stop(self, label: str) -> None:
-        """Stop container gracefully (suppress errors)."""
-        with suppress(ContainerNotFoundError, Exception):
+        """Stop container gracefully."""
+        try:
             container = self.get(label=label)
             if container.status == "running":
                 container.stop()
+        except ContainerNotFoundError:
+            return
+        except DockerException:
+            logger.warning("Failed to stop container %s", label, exc_info=True)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/pages/engines/docker/container.py` around lines 14 - 19, The stop method currently suppresses all Exceptions; change it to only ignore the missing-container case so real Docker/client errors and bugs surface. Specifically, narrow the suppression to only the self.get call (use contextlib.suppress(ContainerNotFoundError) or a try/except catching ContainerNotFoundError) so that getting a missing container is ignored, but leave container.stop() unwrapped (or catch and log specific Docker errors) so other exceptions are not swallowed; refer to the stop method and ContainerNotFoundError to locate and update the code.cli/pages/engines/docker/validators.py (1)
20-20: ⚠️ Potential issue | 🟡 Minor

Pass enum value, not enum object, into the exception.

`self.engine` is a `PageEngineTypeEnum`; passing the enum directly can produce verbose/noisy output. Use `self.engine.value` for cleaner user-facing messages.

🛠️ Proposed fix

```diff
-            raise EngineNotInstalledException(engine=self.engine) from error
+            raise EngineNotInstalledException(engine=self.engine.value) from error
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/pages/engines/docker/validators.py` at line 20, EngineNotInstalledException is being raised with the enum object (self.engine) which yields verbose output; change the raised argument to the enum's string value by passing self.engine.value into EngineNotInstalledException (update the raise site in validators.py where EngineNotInstalledException(engine=self.engine) is used) so the exception message is clean and user-facing; ensure PageEngineTypeEnum is not altered and only its .value is passed.

cli/commons/helpers.py (1)
39-51: ⚠️ Potential issue | 🟡 Minor

`find_available_ports` may return fewer ports than requested without signaling failure.

If the port range is exhausted, callers receive a shorter list than expected. This could cause unpacking errors or silent failures downstream (e.g., at line 104-106 where two ports are expected).

🛠️ Proposed fix

```diff
 def find_available_ports(
     ports: list[int],
     start_range: int = 8040,
     end_range: int = 65535,
 ) -> list[int]:
     available: list[int] = [port for port in ports if is_port_available(port)]
     if len(available) < len(ports):
         for port in range(start_range, end_range + 1):
             if len(available) == len(ports):
                 break
             if port not in available and is_port_available(port):
                 available.append(port)
+    if len(available) < len(ports):
+        raise RuntimeError(
+            f"Could not find {len(ports)} available ports (found {len(available)})"
+        )
     return available
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/commons/helpers.py` around lines 39 - 51, find_available_ports currently returns a shorter list if the requested number of ports can't be found, which causes downstream unpacking errors; modify find_available_ports to check after the search whether len(available) == len(ports) and if not raise a clear exception (e.g., ValueError) with details including requested ports, start_range/end_range and how many were found, so callers (who expect two ports) can catch/fail fast; reference is_port_available within the function to keep the same discovery logic but enforce this post-check and error path.
🧹 Nitpick comments (10)
cli/pages/helpers.py (1)
196-200: Consider extracting the hardcoded port to a constant.

Port `8044` is duplicated on lines 196 and 200. If this is a fixed architectural value, consider defining it as a constant in settings for maintainability and discoverability.

♻️ Proposed refactor

In `cli/pages/helpers.py`:

```diff
+from cli.commons.settings import PAGES_FLASK_PORT  # or define locally
+
 def generate_page_url(page_name, routing_mode, container=None):
     sanitized = page_name.replace(" ", "-")
     if routing_mode == "subdomain":
-        flask_port = 8044
+        flask_port = PAGES_FLASK_PORT
         return f"http://{sanitized}.localhost:{flask_port}/"
     if routing_mode == "path":
-        flask_port = 8044
+        flask_port = PAGES_FLASK_PORT
         return f"http://localhost:{flask_port}/{sanitized}"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/pages/helpers.py` around lines 196 - 200, Extract the duplicated hardcoded port 8044 into a single constant (e.g., FLASK_PORT = 8044) and replace both uses of the local flask_port variable in the helper that builds the URL with that constant; update the module to import or define FLASK_PORT (in this file or a shared settings module) and use it in the code paths that reference flask_port and where routing_mode is "path" so the port is maintained in one place.

cli/commons/helpers.py (1)
136-140: Move `validate_engine_installed()` outside the loop.

The validator check is called for every image, but the engine installation status doesn't change between iterations. Call it once before the loop.

♻️ Proposed refactor

```diff
 def verify_and_fetch_images(client: Any, image_names: list[str]) -> None:
     validator = client.get_validator()
+    validator.validate_engine_installed()
+    downloader = client.get_downloader()
     for image_name in image_names:
-        validator.validate_engine_installed()
-        downloader = client.get_downloader()
         downloader.pull_image(image_name=image_name)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/commons/helpers.py` around lines 136 - 140, The validator check (validator.validate_engine_installed) is being called inside the loop for each image; move the installation validation out of the loop: obtain validator via client.get_validator(), call validator.validate_engine_installed() once before iterating image_names, then inside the loop only call client.get_downloader() and downloader.pull_image(image_name=image_name) for each image to avoid redundant checks.

cli/pages/engines/templates/hot_reload_server.py (1)
120-121: Consider logging parse failures for debugging.

Silent exception swallowing makes it difficult to diagnose malformed error payloads from browsers. Even minimal logging would help during development.

♻️ Proposed fix

```diff
         except (json.JSONDecodeError, Exception):
-            pass
+            pass  # Malformed payload; browser error not captured
```

Or for more visibility during development:

```diff
-        except (json.JSONDecodeError, Exception):
-            pass
+        except json.JSONDecodeError:
+            print("[browser error] malformed JSON payload", flush=True)
+        except Exception as exc:
+            print(f"[browser error] unexpected error: {exc}", flush=True)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/pages/engines/templates/hot_reload_server.py` around lines 120 - 121, Replace the silent swallow in the except (json.JSONDecodeError, Exception) block with a logged error so parse failures are visible; capture the exception and log a concise message (including the exception message and the raw payload/line that failed) via the module logger (or processLogger) using logger.exception or logger.debug depending on environment, so malformed browser payloads are recorded for debugging while preserving existing control flow.cli/functions/helpers.py (1)
300-304: Consider moving imports to module level.

The imports inside the function body avoid potential circular imports, but if this isn't necessary, module-level imports are preferred for clarity and consistency. If circular imports are the concern, this pattern is acceptable but worth documenting with a comment.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/functions/helpers.py` around lines 300 - 304, Imports for ARGO_API_BASE_PATH and HOST_BIND are currently inside the function; move those import statements to the module top so they are module-level imports (referencing ARGO_API_BASE_PATH and HOST_BIND) and remove the in-function imports near the frie_port and url construction; if those imports were placed here to avoid a circular import involving engine_settings.CONTAINER.FRIE.INTERNAL_PORT or argo_adapter_port, leave them in-place but add a brief comment explaining the circular import reason and referencing the symbols (engine_settings.CONTAINER.FRIE.INTERNAL_PORT, argo_adapter_port, frie_port, url) so future readers know why they remain inside the function.

docs/development/architecture.md (1)
186-188: Minor style improvement: vary sentence openers.

Three successive sentences begin with "Pages." Consider rephrasing for better readability:

✏️ Suggested rewrite

```diff
-Pages has no Podman implementation. Pages shares the same Argo container as functions — there are no
-per-page Docker containers. The engine layer manages the workspace directory and the two long-running
+There is no Podman implementation for Pages. The module shares the same Argo container as functions — there are no
+per-page Docker containers. Instead, the engine layer manages the workspace directory and the two long-running
 subprocesses (`copy_watcher`, `hot_reload_server`) rather than containers.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/development/architecture.md` around lines 186 - 188, Three sentences in the paragraph all start with "Pages", hurting flow; reword to vary sentence openers by, for example, starting one sentence with "It" or "Instead" and another with "The engine layer" or "Argo", or by combining clauses—ensure the content still notes that Pages has no Podman implementation, that it shares the Argo container with functions (no per-page Docker containers), and that the engine layer manages the workspace directory and the two subprocesses `copy_watcher` and `hot_reload_server`; update the three sentences around the terms "Pages", "Argo", "functions", "engine layer", "workspace directory", "copy_watcher", and "hot_reload_server" to improve readability while preserving meaning.

cli/pages/pipelines/dev_engine.py (3)
262-278: Consider adding SIGKILL fallback for hung processes.

The graceful shutdown with SIGTERM and 5-second timeout is good. However, if the process doesn't terminate within the deadline, it's left running. Consider adding a SIGKILL fallback for resilience.

♻️ Proposed SIGKILL fallback

```diff
 deadline = time.monotonic() + 5
 while time.monotonic() < deadline:
     try:
         os.kill(pid, 0)
         time.sleep(0.1)
     except (ProcessLookupError, OSError):
         break
+else:
+    # Process still alive after timeout — force kill
+    with suppress(ProcessLookupError, OSError):
+        os.kill(pid, signal.SIGKILL)
 pid_file.unlink(missing_ok=True)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/pages/pipelines/dev_engine.py` around lines 262 - 278, StopHotReloadSubprocessStep.execute currently sends SIGTERM and waits up to 5s but leaves a hung process running; after the existing graceful loop (which uses os.kill(pid, 0) and deadline), if the process is still alive you should send a SIGKILL via os.kill(pid, signal.SIGKILL) (wrap in suppress for ProcessLookupError/OSError) and optionally wait briefly to confirm termination before continuing to pid_file.unlink(missing_ok=True). Reference StopHotReloadSubprocessStep.execute, pid_file, os.kill, signal.SIGTERM and add the SIGKILL fallback after the timeout handling.
448-457: Silent exception swallowing hides potential debugging information.

Multiple `try-except-pass` blocks in status checking could hide unexpected errors. While graceful degradation is appropriate for status checks, consider logging at debug level to aid troubleshooting.

♻️ Proposed logging addition

```diff
+import logging
+
+logger = logging.getLogger(__name__)
+
 argo_running = False
 try:
     resp = httpx.get(f"http://localhost:{argo_port}/api/_/route/", timeout=2.0)
     if resp.status_code == 200:
         for adapter in resp.json():
             if adapter.get("label", "") == f"pages-{workspace_key}":
                 argo_running = True
                 break
-except Exception:
-    pass
+except Exception as exc:
+    logger.debug("Failed to check Argo status: %s", exc)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/pages/pipelines/dev_engine.py` around lines 448 - 457, The try/except around the httpx.get call currently swallows all exceptions (hiding useful debug info); update the block that sets argo_running (the httpx.get call using argo_port and workspace_key) to catch Exception as e and emit a debug-level log with the exception and context (e.g., "failed to check Argo adapter for pages-{workspace_key} on port {argo_port}: {e}") using the module's logger (or create one if missing) instead of a bare pass, while preserving the existing graceful behavior (do not re-raise).
167-199: Consider narrowing the exception handling for better debuggability.

The `TryGetArgoPortStep` catches all exceptions silently, which is appropriate for a non-fatal status check. However, catching specific Docker exceptions (e.g., `docker.errors.NotFound`, `docker.errors.APIError`) would help distinguish expected failures from unexpected ones during debugging.

♻️ Proposed narrower exception handling

```diff
+from docker.errors import APIError
+from docker.errors import NotFound
+
 try:
     client = data["client"]
     container = client.client.containers.get(ARGO_CONTAINER_NAME)
     # ... rest of logic
-except Exception:
+except (NotFound, APIError, KeyError, TypeError):
     data["argo_adapter_port"] = None
     data["argo_target_port"] = None
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/pages/pipelines/dev_engine.py` around lines 167 - 199, The TryGetArgoPortStep.execute currently swallows all exceptions; narrow this by importing docker.errors and catching specific Docker exceptions (e.g., docker.errors.NotFound and docker.errors.APIError) to set data["argo_adapter_port"] and data["argo_target_port"] to None when the container is missing or the Docker API fails, optionally catch docker.errors.DockerException as a broader Docker-only fallback, and let other unexpected exceptions propagate (or be logged) so they aren't silently ignored; reference the TryGetArgoPortStep class, its execute method, ARGO_CONTAINER_NAME constant and the data keys "argo_adapter_port"/"argo_target_port" when making the changes.

cli/pages/engines/helpers.py (1)
178-200: Registration retry lacks error context and uses arbitrary delays.

The `time.sleep(1)` before registration and `time.sleep(2)` before retry are magic numbers without explanation. If registration fails twice, the error from `raise_for_status()` won't indicate this was a retry scenario.

♻️ Proposed improvement with logging and constants

```diff
+import logging
+
+logger = logging.getLogger(__name__)
+
+_ARGO_STARTUP_DELAY = 1  # seconds to wait for Argo to be ready
+_ARGO_RETRY_DELAY = 2  # seconds to wait before retry

 def register_page_in_argo(workspace_key: str, argo_admin_port: int) -> None:
     """Register a local_file route for workspace_key in Argo."""
-    time.sleep(1)
+    time.sleep(_ARGO_STARTUP_DELAY)
     payload = {
         ...
     }
     url = f"http://localhost:{argo_admin_port}/{ARGO_API_BASE_PATH}/"
     response = httpx.post(url, json=payload, timeout=10.0)
     if not response.is_success:
-        time.sleep(2)
+        logger.debug("Argo registration failed (status %s), retrying...", response.status_code)
+        time.sleep(_ARGO_RETRY_DELAY)
         response = httpx.post(url, json=payload, timeout=10.0)
     response.raise_for_status()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/pages/engines/helpers.py` around lines 178 - 200, The register_page_in_argo function currently uses unexplained time.sleep(1) and time.sleep(2) magic delays and loses context when raise_for_status() is called after a retry; replace those sleeps with named constants (e.g., REGISTRATION_INITIAL_DELAY, REGISTRATION_RETRY_DELAY) and add structured logging around the HTTP calls (use a logger to log attempt number, payload path/label, URL and response status) before retrying; on the second failure, raise a clear exception or call response.raise_for_status() with additional context indicating it was the retry attempt (include workspace_key and attempt count) and consider using a small retry loop around httpx.post so the logic is centralized (refer to register_page_in_argo, ARGO_API_BASE_PATH, and the httpx.post calls to locate the change).

cli/pages/commands.py (1)
195-200: Clarify "not yet implemented" status in logs command.

The docstring says logs are "not yet implemented for the Argo-based engine," but the executor calls `ShowPageLogsStep` which does tail log files. Consider updating the docstring to reflect actual functionality.

♻️ Suggested docstring update

```diff
-    """Shows dev server logs. Not yet implemented for the Argo-based engine."""
+    """Shows dev server logs (copy watcher and hot reload)."""
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/pages/commands.py` around lines 195 - 200, The docstring for the logs command incorrectly states "Not yet implemented for the Argo-based engine" while the executor call (executor.logs_local_dev_server) invokes ShowPageLogsStep which tails log files; update the docstring to accurately describe what the command does (e.g., "Shows dev server logs; on Argo-based engine it tails page logs via ShowPageLogsStep") and mention the supported options (tail, follow, verbose) so the behavior matches executor.logs_local_dev_server and ShowPageLogsStep.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cli/commons/helpers.py`:
- Around line 97-99: The GET and DELETE calls (resp = httpx.get(url) and
httpx.delete(url)) are missing timeouts; update both calls to include a sensible
timeout (e.g., timeout=5 or an httpx.Timeout instance) and handle
timeout/network errors by catching httpx.RequestError or httpx.TimeoutException
so the helper does not hang or crash if the Argo container is unresponsive;
ensure you apply the change where those two calls appear and preserve existing
response-status logic.
In `@cli/pages/engines/docker/network.py`:
- Around line 14-28: FunctionDockerNetworkManager is still referencing the
flattened setting key (engine_settings.CONTAINER.NETWORK_NAME) while the Pages
module uses the nested path; update the manager to use the nested settings
structure (page_engine_settings.CONTAINER.NETWORK.NAME and
page_engine_settings.CONTAINER.NETWORK.DRIVER) so the create method (create) and
the default list names assignment (list) reference
page_engine_settings.CONTAINER.NETWORK.NAME and the driver uses
page_engine_settings.CONTAINER.NETWORK.DRIVER; keep the existing exception
handling in get unchanged.
---
Outside diff comments:
In `@cli/pages/tests/test_executor.py`:
- Around line 259-296: The test test_start_page_pipeline_steps's
mock_step_classes is missing two pipeline step classes declared in
start_local_dev_server: add "CleanOrphanedPagesStep" and
"DeregisterPageFromArgoStep" to the mock_step_classes list, ensure you set them
on the mock_pipelines (via setattr) like the others, and keep the final loop
that asserts getattr(mock_pipelines, step_class).assert_called_once() so those
two steps are also verified as instantiated.
---
Duplicate comments:
In `@cli/commons/helpers.py`:
- Around line 39-51: find_available_ports currently returns a shorter list if
the requested number of ports can't be found, which causes downstream unpacking
errors; modify find_available_ports to check after the search whether
len(available) == len(ports) and if not raise a clear exception (e.g.,
ValueError) with details including requested ports, start_range/end_range and
how many were found, so callers (who expect two ports) can catch/fail fast;
reference is_port_available within the function to keep the same discovery logic
but enforce this post-check and error path.
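A minimal sketch of the post-check this prompt describes, assuming `is_port_available` is a simple bind probe (the CLI's actual discovery logic may differ):

```python
import socket

def is_port_available(port: int, host: str = "127.0.0.1") -> bool:
    # Assumed semantics: a port is free if we can bind to it.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            sock.bind((host, port))
            return True
        except OSError:
            return False

def find_available_ports(count: int, start_range: int, end_range: int) -> list[int]:
    available: list[int] = []
    for port in range(start_range, end_range + 1):
        if is_port_available(port):
            available.append(port)
        if len(available) == count:
            return available
    # Fail fast with context instead of returning a short list that would
    # break tuple unpacking in callers expecting exactly `count` ports.
    raise ValueError(
        f"Requested {count} free ports in {start_range}-{end_range}, "
        f"found only {len(available)}"
    )
```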
In `@cli/pages/engines/docker/container.py`:
- Around line 14-19: The stop method currently suppresses all Exceptions; change
it to only ignore the missing-container case so real Docker/client errors and
bugs surface. Specifically, narrow the suppression to only the self.get call
(use contextlib.suppress(ContainerNotFoundError) or a try/except catching
ContainerNotFoundError) so that getting a missing container is ignored, but
leave container.stop() unwrapped (or catch and log specific Docker errors) so
other exceptions are not swallowed; refer to the stop method and
ContainerNotFoundError to locate and update the code.
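Sketched with stand-in classes (the real `ContainerNotFoundError` and container manager live in the CLI), the narrowed suppression looks like this: only the lookup is guarded, so errors raised by `container.stop()` still surface.

```python
class ContainerNotFoundError(Exception):
    """Stand-in for the CLI's real exception class."""

class ContainerManager:
    def __init__(self, containers):
        self._containers = containers

    def get(self, name):
        if name not in self._containers:
            raise ContainerNotFoundError(name)
        return self._containers[name]

def stop(manager, name):
    try:
        container = manager.get(name)
    except ContainerNotFoundError:
        return  # nothing to stop; the only case we deliberately ignore
    container.stop()  # real Docker/client errors here propagate
```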
In `@cli/pages/engines/docker/validators.py`:
- Line 20: EngineNotInstalledException is being raised with the enum object
(self.engine) which yields verbose output; change the raised argument to the
enum's string value by passing self.engine.value into
EngineNotInstalledException (update the raise site in validators.py where
EngineNotInstalledException(engine=self.engine) is used) so the exception
message is clean and user-facing; ensure PageEngineTypeEnum is not altered and
only its .value is passed.
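A self-contained illustration of the difference, using stand-in enum and exception classes rather than the CLI's real ones:

```python
from enum import Enum

class PageEngineTypeEnum(Enum):
    DOCKER = "docker"

class EngineNotInstalledException(Exception):
    def __init__(self, engine):
        super().__init__(f"Engine not installed: {engine}")

# Passing the enum member embeds its repr-style name in the message:
verbose = EngineNotInstalledException(PageEngineTypeEnum.DOCKER)
# Passing .value keeps the message clean and user-facing:
clean = EngineNotInstalledException(PageEngineTypeEnum.DOCKER.value)
```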
---
Nitpick comments:
In `@cli/commons/helpers.py`:
- Around line 136-140: The validator.check (validator.validate_engine_installed)
is being called inside the loop for each image; move the installation validation
out of the loop: obtain validator via client.get_validator(), call
validator.validate_engine_installed() once before iterating image_names, then
inside the loop only call client.get_downloader() and
downloader.pull_image(image_name=image_name) for each image to avoid redundant
checks.
In `@cli/functions/helpers.py`:
- Around line 300-304: Imports for ARGO_API_BASE_PATH and HOST_BIND are
currently inside the function; move those import statements to the module top so
they are module-level imports (referencing ARGO_API_BASE_PATH and HOST_BIND) and
remove the in-function imports near the frie_port and url construction; if those
imports were placed here to avoid a circular import involving
engine_settings.CONTAINER.FRIE.INTERNAL_PORT or argo_adapter_port, leave them
in-place but add a brief comment explaining the circular import reason and
referencing the symbols (engine_settings.CONTAINER.FRIE.INTERNAL_PORT,
argo_adapter_port, frie_port, url) so future readers know why they remain inside
the function.
In `@cli/pages/commands.py`:
- Around line 195-200: The docstring for the logs command incorrectly states
"Not yet implemented for the Argo-based engine" while the executor call
(executor.logs_local_dev_server) invokes ShowPageLogsStep which tails log files;
update the docstring to accurately describe what the command does (e.g., "Shows
dev server logs; on Argo-based engine it tails page logs via ShowPageLogsStep")
and mention the supported options (tail, follow, verbose) so the behavior
matches executor.logs_local_dev_server and ShowPageLogsStep.
In `@cli/pages/engines/helpers.py`:
- Around line 178-200: The register_page_in_argo function currently uses
unexplained time.sleep(1) and time.sleep(2) magic delays and loses context when
raise_for_status() is called after a retry; replace those sleeps with named
constants (e.g., REGISTRATION_INITIAL_DELAY, REGISTRATION_RETRY_DELAY) and add
structured logging around the HTTP calls (use a logger to log attempt number,
payload path/label, URL and response status) before retrying; on the second
failure, raise a clear exception or call response.raise_for_status() with
additional context indicating it was the retry attempt (include workspace_key
and attempt count) and consider using a small retry loop around httpx.post so
the logic is centralized (refer to register_page_in_argo, ARGO_API_BASE_PATH,
and the httpx.post calls to locate the change).
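One way to centralize that retry loop, sketched with an injected `post` callable instead of `httpx.post` so the flow runs without the library; the constant names follow the prompt's suggestion and the exception mapping is noted in comments.

```python
import logging
import time

logger = logging.getLogger(__name__)

# Named constants replace the bare sleep(1)/sleep(2) magic delays.
REGISTRATION_INITIAL_DELAY = 1.0
REGISTRATION_RETRY_DELAY = 2.0
MAX_ATTEMPTS = 2

def post_with_retry(post, url, payload, *,
                    attempts=MAX_ATTEMPTS, delay=REGISTRATION_RETRY_DELAY):
    """Call `post` (stand-in for httpx.post) up to `attempts` times.

    Retries on raised exceptions as well as falsy results, logging each
    attempt so a final failure carries context (attempt count, URL).
    """
    last_error = None
    for attempt in range(1, attempts + 1):
        try:
            response = post(url, json=payload)
            if response:  # real code: response.is_success
                return response
            last_error = RuntimeError(f"non-success response on attempt {attempt}")
        except Exception as exc:  # real code: httpx.HTTPError
            last_error = exc
        logger.warning("attempt %d/%d failed for %s: %s",
                       attempt, attempts, url, last_error)
        if attempt < attempts:
            time.sleep(delay)
    raise RuntimeError(f"registration failed after {attempts} attempts: {last_error}")
```

Making `delay` a parameter keeps the constants authoritative in production while letting tests pass `delay=0`.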
In `@cli/pages/engines/templates/hot_reload_server.py`:
- Around line 120-121: Replace the silent swallow in the except
(json.JSONDecodeError, Exception) block with a logged error so parse failures
are visible; capture the exception and log a concise message (including the
exception message and the raw payload/line that failed) via the module logger
(or processLogger) using logger.exception or logger.debug depending on
environment, so malformed browser payloads are recorded for debugging while
preserving existing control flow.
In `@cli/pages/helpers.py`:
- Around line 196-200: Extract the duplicated hardcoded port 8044 into a single
constant (e.g., FLASK_PORT = 8044) and replace both uses of the local flask_port
variable in the helper that builds the URL with that constant; update the module
to import or define FLASK_PORT (in this file or a shared settings module) and
use it in the code paths that reference flask_port and where routing_mode is
"path" so the port is maintained in one place.
In `@cli/pages/pipelines/dev_engine.py`:
- Around line 262-278: StopHotReloadSubprocessStep.execute currently sends
SIGTERM and waits up to 5s but leaves a hung process running; after the existing
graceful loop (which uses os.kill(pid, 0) and deadline), if the process is still
alive you should send a SIGKILL via os.kill(pid, signal.SIGKILL) (wrap in
suppress for ProcessLookupError/OSError) and optionally wait briefly to confirm
termination before continuing to pid_file.unlink(missing_ok=True). Reference
StopHotReloadSubprocessStep.execute, pid_file, os.kill, signal.SIGTERM and add
the SIGKILL fallback after the timeout handling.
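A sketch of the SIGTERM-then-SIGKILL fallback the prompt describes (POSIX-only; note that `os.kill(pid, 0)` also succeeds for unreaped zombie children, so a caller that spawned the process should still reap it afterwards):

```python
import os
import signal
import time
from contextlib import suppress

def terminate_pid(pid: int, timeout: float = 5.0) -> None:
    """Send SIGTERM, wait up to `timeout`, then SIGKILL as a last resort."""
    with suppress(ProcessLookupError, OSError):
        os.kill(pid, signal.SIGTERM)
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            os.kill(pid, 0)  # probe: raises once the process is gone
        except (ProcessLookupError, OSError):
            return
        time.sleep(0.1)
    # Still alive after the grace period: force-kill, ignoring races
    # where the process exits between the probe and the kill.
    with suppress(ProcessLookupError, OSError):
        os.kill(pid, signal.SIGKILL)
```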
- Around line 448-457: The try/except around the httpx.get call currently
swallows all exceptions (hiding useful debug info); update the block that sets
argo_running (the httpx.get call using argo_port and workspace_key) to catch
Exception as e and emit a debug-level log with the exception and context (e.g.,
"failed to check Argo adapter for pages-{workspace_key} on port {argo_port}:
{e}") using the module's logger (or create one if missing) instead of a bare
pass, while preserving the existing graceful behavior (do not re-raise).
- Around line 167-199: The TryGetArgoPortStep.execute currently swallows all
exceptions; narrow this by importing docker.errors and catching specific Docker
exceptions (e.g., docker.errors.NotFound and docker.errors.APIError) to set
data["argo_adapter_port"] and data["argo_target_port"] to None when the
container is missing or the Docker API fails, optionally catch
docker.errors.DockerException as a broader Docker-only fallback, and let other
unexpected exceptions propagate (or be logged) so they aren't silently ignored;
reference the TryGetArgoPortStep class, its execute method, ARGO_CONTAINER_NAME
constant and the data keys "argo_adapter_port"/"argo_target_port" when making
the changes.
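The narrowing can be illustrated with a stand-in exception hierarchy shaped like `docker.errors` (`DockerException` as base, `NotFound`/`APIError` as subclasses; the real code would `import docker.errors`). The port-lookup body here is hypothetical:

```python
# Stand-ins mirroring the docker-py exception hierarchy.
class DockerException(Exception): pass
class APIError(DockerException): pass
class NotFound(DockerException): pass

def try_get_argo_ports(get_container):
    """Return (adapter_port, target_port), or (None, None) on Docker-only errors."""
    try:
        container = get_container("argo")
        return container["adapter_port"], container["target_port"]
    except (NotFound, APIError):
        # Container missing or Docker API failed: degrade gracefully.
        return None, None
    # Anything else (KeyError, TypeError, ...) propagates instead of being
    # silently swallowed, so real bugs stay visible.
```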
In `@docs/development/architecture.md`:
- Around line 186-188: Three sentences in the paragraph all start with "Pages",
hurting flow; reword to vary sentence openers by, for example, starting one
sentence with "It" or "Instead" and another with "The engine layer" or "Argo",
or by combining clauses—ensure the content still notes that Pages has no Podman
implementation, that it shares the Argo container with functions (no per-page
Docker containers), and that the engine layer manages the workspace directory
and the two subprocesses `copy_watcher` and `hot_reload_server`; update the
three sentences around the terms "Pages", "Argo", "functions", "engine layer",
"workspace directory", "copy_watcher", and "hot_reload_server" to improve
readability while preserving meaning.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: c0494b0f-71f7-450c-89fb-414ed16cdee8
📒 Files selected for processing (59)
- cli/commons/engines/__init__.py
- cli/commons/engines/docker/__init__.py
- cli/commons/engines/docker/client.py
- cli/commons/engines/docker/container.py
- cli/commons/engines/docker/network.py
- cli/commons/exceptions.py
- cli/commons/helpers.py
- cli/commons/helpers_tests.py
- cli/commons/settings.py
- cli/functions/commands.py
- cli/functions/engines/docker/client.py
- cli/functions/engines/docker/container.py
- cli/functions/engines/docker/network.py
- cli/functions/engines/exceptions.py
- cli/functions/engines/models.py
- cli/functions/engines/settings.py
- cli/functions/engines/tests/test_models.py
- cli/functions/helpers.py
- cli/functions/pipelines.py
- cli/functions/tests/test_utils.py
- cli/pages/commands.py
- cli/pages/engines/abstracts/__init__.py
- cli/pages/engines/abstracts/client.py
- cli/pages/engines/abstracts/container.py
- cli/pages/engines/abstracts/network.py
- cli/pages/engines/docker/Dockerfile
- cli/pages/engines/docker/README.md
- cli/pages/engines/docker/build_image.py
- cli/pages/engines/docker/client.py
- cli/pages/engines/docker/container.py
- cli/pages/engines/docker/image.py
- cli/pages/engines/docker/network.py
- cli/pages/engines/docker/validators.py
- cli/pages/engines/exceptions.py
- cli/pages/engines/helpers.py
- cli/pages/engines/settings.py
- cli/pages/engines/templates/copy_watcher.py
- cli/pages/engines/templates/flask_manager.py
- cli/pages/engines/templates/hot_reload_server.py
- cli/pages/engines/templates/page_server.py
- cli/pages/executor.py
- cli/pages/handlers.py
- cli/pages/helpers.py
- cli/pages/models.py
- cli/pages/pipelines/__init__.py
- cli/pages/pipelines/cloud_crud.py
- cli/pages/pipelines/dev_engine.py
- cli/pages/pipelines/dev_scaffold.py
- cli/pages/pipelines/sync.py
- cli/pages/tests/engines/test_helpers.py
- cli/pages/tests/test_dev_commands.py
- cli/pages/tests/test_dev_engine_steps.py
- cli/pages/tests/test_dev_scaffold_workspace.py
- cli/pages/tests/test_executor.py
- cli/pages/tests/test_pipelines.py
- cli/pages/tests/test_pull_workspace.py
- cli/settings.py
- docs/development/architecture.md
- docs/development/modules.md
💤 Files with no reviewable changes (9)
- cli/pages/engines/abstracts/network.py
- cli/pages/engines/abstracts/__init__.py
- cli/pages/engines/docker/Dockerfile
- cli/pages/engines/docker/README.md
- cli/pages/engines/docker/build_image.py
- cli/pages/engines/abstracts/container.py
- cli/pages/engines/abstracts/client.py
- cli/pages/engines/templates/flask_manager.py
- cli/pages/engines/templates/page_server.py
✅ Files skipped from review due to trivial changes (8)
- cli/functions/engines/exceptions.py
- cli/functions/engines/docker/client.py
- cli/pages/tests/test_dev_scaffold_workspace.py
- cli/commons/engines/docker/client.py
- cli/pages/engines/exceptions.py
- cli/commons/settings.py
- cli/pages/tests/engines/test_helpers.py
- cli/pages/tests/test_pull_workspace.py
🚧 Files skipped from review as they are similar to previous changes (17)
- cli/pages/tests/test_dev_commands.py
- cli/pages/models.py
- cli/functions/engines/tests/test_models.py
- cli/functions/tests/test_utils.py
- cli/commons/helpers_tests.py
- cli/commons/engines/docker/network.py
- cli/pages/handlers.py
- cli/pages/engines/docker/image.py
- cli/functions/engines/docker/network.py
- cli/functions/engines/models.py
- cli/commons/exceptions.py
- cli/functions/pipelines.py
- cli/pages/tests/test_pipelines.py
- cli/commons/engines/docker/container.py
- cli/pages/engines/docker/client.py
- cli/pages/pipelines/__init__.py
- cli/pages/pipelines/sync.py
```diff
 def create(
     self,
-    name: str = page_engine_settings.CONTAINER.NETWORK_NAME,
-    driver: ContainerNetworkModeEnum = page_engine_settings.CONTAINER.NETWORK_DRIVER,
+    name: str = page_engine_settings.CONTAINER.NETWORK.NAME,
+    driver: ContainerNetworkModeEnum = page_engine_settings.CONTAINER.NETWORK.DRIVER,
 ):
     """Create a new network"""
     return self.client.networks.create(name=name, driver=driver)

 def get(self, network_id: str):
     """Get network by ID or name"""
     try:
         return self.client.networks.get(network_id)
     except NotFound as error:
         raise NetworkNotFoundException(network_id) from error

 def list(self, names: list[str] | None = None):
     """List networks by names"""
-    names = names or [page_engine_settings.CONTAINER.NETWORK_NAME]
+    names = names or [page_engine_settings.CONTAINER.NETWORK.NAME]
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check if FunctionDockerNetworkManager uses old or new settings structure
rg -n "NETWORK_NAME|NETWORK\.NAME" cli/functions/engines/
Repository: ubidots/ubidots-cli
Length of output: 357
Update FunctionDockerNetworkManager to use nested settings path for consistency.
The Pages module uses page_engine_settings.CONTAINER.NETWORK.NAME (nested), but FunctionDockerNetworkManager still uses the flattened engine_settings.CONTAINER.NETWORK_NAME. Update lines 13 and 26 in cli/functions/engines/docker/network.py to align with the Pages module's nested settings structure.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cli/pages/engines/docker/network.py` around lines 14 - 28,
FunctionDockerNetworkManager is still referencing the flattened setting key
(engine_settings.CONTAINER.NETWORK_NAME) while the Pages module uses the nested
path; update the manager to use the nested settings structure
(page_engine_settings.CONTAINER.NETWORK.NAME and
page_engine_settings.CONTAINER.NETWORK.DRIVER) so the create method (create) and
the default list names assignment (list) reference
page_engine_settings.CONTAINER.NETWORK.NAME and the driver uses
page_engine_settings.CONTAINER.NETWORK.DRIVER; keep the existing exception
handling in get unchanged.
I don't agree.
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check the current state of FunctionDockerNetworkManager
cat cli/functions/engines/docker/network.py
echo "---"
# Check what engine_settings.CONTAINER looks like for functions
cat cli/functions/engines/settings.py
Length of output: 2529
@RandomGenericUsername, you are right — I apologize for the incorrect comment. After inspecting cli/functions/engines/settings.py, I can confirm that the functions module's ContainerSettings keeps NETWORK_NAME and NETWORK_DRIVER as flat fields directly on the class — there is no nested NETWORK object in that module. The FunctionDockerNetworkManager correctly references engine_settings.CONTAINER.NETWORK_NAME for its own independent settings schema. The nested CONTAINER.NETWORK.NAME path only exists in the Pages module, which was separately refactored. No change is needed here.
Actionable comments posted: 4
♻️ Duplicate comments (1)
cli/commons/helpers.py (1)
98-104: ⚠️ Potential issue | 🟠 Major: Make adapter cleanup HTTP calls non-fatal.
At Lines 101-103, request failures (timeout/DNS/connection) will raise and can fail the whole Argo reuse path even when the container is otherwise healthy. This cleanup should be best-effort.
🛠️ Proposed fix
```diff
 if container.status == _RUNNING and frie_label:
     port = _get_external_port(container, ARGO_INTERNAL_ADAPTER_PORT)
     url = f"http://{HOST_BIND}:{port}/{ARGO_API_BASE_PATH}/~{frie_label}"
-    resp = httpx.get(url, timeout=5.0)
-    if resp.status_code == httpx.codes.OK:
-        httpx.delete(url, timeout=5.0)
+    try:
+        resp = httpx.get(url, timeout=5.0)
+        if resp.status_code == httpx.codes.OK:
+            httpx.delete(url, timeout=5.0)
+    except httpx.RequestError:
+        pass
```

```bash
#!/bin/bash
rg -nP 'httpx\.(get|delete)\(' -C2 cli/commons/helpers.py
rg -nP 'except\s+httpx\.(RequestError|TimeoutException)' -C2 cli/commons/helpers.py
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/commons/helpers.py` around lines 98 - 104, The HTTP cleanup calls to the adapter (httpx.get and httpx.delete) in the block that runs when container.status == _RUNNING and frie_label are currently able to raise network exceptions and abort reuse; wrap the call sequence around a try/except that catches httpx.RequestError and httpx.TimeoutException (or the generic httpx.HTTPError if preferred), so failures are best-effort: attempt _get_external_port(...), build the URL, and call httpx.get(...) and httpx.delete(... ) inside the try, check resp.status_code as before, but on exception log a non-fatal debug/warn message and continue to return container; ensure the exception types from httpx are imported/used and no exception propagates out of this cleanup path (symbols: container.status, _RUNNING, frie_label, _get_external_port, ARGO_INTERNAL_ADAPTER_PORT, ARGO_API_BASE_PATH, httpx.get, httpx.delete).
🧹 Nitpick comments (1)
cli/commons/helpers.py (1)
140-144: Avoid repeated engine validation and downloader lookup inside the image loop. At Lines 142-144, `validate_engine_installed()` and `get_downloader()` are called per image. Validate once and reuse the downloader to reduce redundant ping/lookup work.

♻️ Proposed refactor

```diff
 def verify_and_fetch_images(client: Any, image_names: list[str]) -> None:
 @@
     validator = client.get_validator()
-    for image_name in image_names:
-        validator.validate_engine_installed()
-        downloader = client.get_downloader()
+    validator.validate_engine_installed()
+    downloader = client.get_downloader()
+    for image_name in image_names:
         downloader.pull_image(image_name=image_name)
```
Verify each finding against the current code and only fix it if needed. In `@cli/commons/helpers.py` around lines 140 - 144, The loop calls client.get_validator().validate_engine_installed() and client.get_downloader() for each image, causing redundant validation and downloader lookups; move the validation and downloader acquisition out of the image_names loop by calling client.get_validator().validate_engine_installed() once before the loop and assigning downloader = client.get_downloader() once, then iterate image_names and call downloader.pull_image(image_name=image_name) inside the loop (referencing get_validator, validate_engine_installed, get_downloader, and pull_image).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cli/pages/engines/helpers.py`:
- Around line 85-107: get_tracked_files currently appends manifest paths
directly (from model.js_libraries, model.css_libraries, model.link_libraries,
model.static_paths) which allows absolute paths or paths with ".." to escape
source_dir; change the logic to resolve/normalize each candidate path (use
source_dir / path then call .resolve() or equivalent) and only append it to
tracked if the resolved path is contained within source_dir (e.g.,
path.is_relative_to(source_dir) or comparing parents) and is a file; apply the
same containment check for static_abs before rglob/append, and still filter by
_COPY_EXCLUDED at the end.
- Around line 199-203: The current logic calls httpx.post(url, json=payload,
timeout=10.0) once and only retries when response.is_success is false, so
transport errors that raise during httpx.post are not retried; update the block
to perform the POST inside a try/except that catches httpx.HTTPError (or more
specific transport exceptions) and on exception sleep(2) and retry the request
once (or as configured), then call response.raise_for_status() after a
successful response; reference and update the existing response variable usage,
httpx.post calls, response.is_success check, and response.raise_for_status to
ensure both non-2xx responses and HTTP/transport exceptions are retried.
In `@cli/pages/engines/templates/copy_watcher.py`:
- Around line 113-114: The current check uses path.name == "manifest.toml" which
matches nested files; change it to only trigger when the manifest is the
repository/watch root manifest by comparing the full path to the root manifest
(e.g., replace the condition with path == root_dir / "manifest.toml" or
path.parent == root_dir), so the re-sync in the block that logs "manifest.toml
changed — re-syncing all files" only runs for the root manifest file.
- Around line 62-67: The full-sync function _copy_all currently only copies
tracked files returned by _get_tracked and never removes stale files in
workspace_dir; update _copy_all to compute the set of expected target paths
(based on tracked list and their relative destinations), delete any
files/directories present in workspace_dir that are not in that set (pruning
stale artifacts), then proceed to call _copy_file for each tracked path; use
_get_tracked to build the expected list and ensure deletions handle both files
and empty directories safely before returning the tracked list.
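A sketch of a pruning full sync; `sync_workspace` and its signature are hypothetical stand-ins for `_copy_all`/`_copy_file`/`_get_tracked`:

```python
import shutil
from pathlib import Path

def sync_workspace(source_dir: Path, workspace_dir: Path, tracked: list[Path]) -> None:
    """Copy tracked files into the workspace and prune everything else.

    `tracked` holds absolute paths under source_dir (as _get_tracked would
    return); expected destinations are derived from their relative paths.
    """
    expected = {workspace_dir / f.relative_to(source_dir) for f in tracked}
    # Prune deepest-first so files are removed before their now-empty parents.
    for path in sorted(workspace_dir.rglob("*"),
                       key=lambda p: len(p.parts), reverse=True):
        if path.is_file() and path not in expected:
            path.unlink()
        elif path.is_dir() and not any(path.iterdir()):
            path.rmdir()
    for src in tracked:
        dst = workspace_dir / src.relative_to(source_dir)
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)
```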
---
Duplicate comments:
In `@cli/commons/helpers.py`:
- Around line 98-104: The HTTP cleanup calls to the adapter (httpx.get and
httpx.delete) in the block that runs when container.status == _RUNNING and
frie_label are currently able to raise network exceptions and abort reuse; wrap
the call sequence around a try/except that catches httpx.RequestError and
httpx.TimeoutException (or the generic httpx.HTTPError if preferred), so
failures are best-effort: attempt _get_external_port(...), build the URL, and
call httpx.get(...) and httpx.delete(... ) inside the try, check
resp.status_code as before, but on exception log a non-fatal debug/warn message
and continue to return container; ensure the exception types from httpx are
imported/used and no exception propagates out of this cleanup path (symbols:
container.status, _RUNNING, frie_label, _get_external_port,
ARGO_INTERNAL_ADAPTER_PORT, ARGO_API_BASE_PATH, httpx.get, httpx.delete).
---
Nitpick comments:
In `@cli/commons/helpers.py`:
- Around line 140-144: The loop calls
client.get_validator().validate_engine_installed() and client.get_downloader()
for each image, causing redundant validation and downloader lookups; move the
validation and downloader acquisition out of the image_names loop by calling
client.get_validator().validate_engine_installed() once before the loop and
assigning downloader = client.get_downloader() once, then iterate image_names
and call downloader.pull_image(image_name=image_name) inside the loop
(referencing get_validator, validate_engine_installed, get_downloader, and
pull_image).
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 03c59081-56f7-44c2-8f25-a8e27f75bd71
📒 Files selected for processing (7)
- cli/commons/helpers.py
- cli/pages/engines/docker/container.py
- cli/pages/engines/docker/validators.py
- cli/pages/engines/helpers.py
- cli/pages/engines/templates/copy_watcher.py
- cli/pages/handlers.py
- docs/development/modules.md
✅ Files skipped from review due to trivial changes (1)
- docs/development/modules.md
🚧 Files skipped from review as they are similar to previous changes (2)
- cli/pages/handlers.py
- cli/pages/engines/docker/container.py
```diff
-def check_container_status():
-    """Check if page container is already running"""
-    container = None
-    try:
-        container = container_manager.get(container_name)
-    except ContainerNotFoundException:
-        return None
-
-    if container is None:
-        return None
-
-    if container.status == ContainerStatusEnum.RUNNING:
-        return container
-
-    return None
+    for entry in model.js_libraries:
+        src = entry.get("src", "")
+        if src and not src.startswith(("http://", "https://")):
+            tracked.append(source_dir / src)
+
+    for entry in model.css_libraries:
+        href = entry.get("href", "")
+        if href and not href.startswith(("http://", "https://")):
+            tracked.append(source_dir / href)
+
+    for entry in model.link_libraries:
+        href = entry.get("href", "")
+        if href and not href.startswith(("http://", "https://")):
+            tracked.append(source_dir / href)
+
+    for static_path in model.static_paths:
+        static_abs = source_dir / static_path
+        if static_abs.is_dir():
+            tracked.extend(f for f in static_abs.rglob("*") if f.is_file())
+        elif static_abs.is_file():
+            tracked.append(static_abs)
+
+    return [f for f in tracked if f.is_file() and f.name not in _COPY_EXCLUDED]
```
Reject tracked asset paths that escape source_dir.
Manifest entries can be absolute paths or include ..; those should be filtered out. Otherwise get_tracked_files() can track files outside the project root and create noisy/unsafe behavior downstream.
💡 Suggested fix (normalize + root containment check)
def get_tracked_files(source_dir: Path) -> list[Path]:
@@
- tracked: list[Path] = [
+ source_root = source_dir.resolve()
+
+ def _resolve_local(rel_or_abs: str) -> Path | None:
+ candidate = (source_root / rel_or_abs).resolve()
+ try:
+ candidate.relative_to(source_root)
+ except ValueError:
+ logger.warning("Ignoring out-of-root tracked path: %s", rel_or_abs)
+ return None
+ return candidate
+
+ tracked: list[Path] = [
source_dir / "body.html",
source_dir / "manifest.toml",
]
@@
for entry in model.js_libraries:
src = entry.get("src", "")
if src and not src.startswith(("http://", "https://")):
- tracked.append(source_dir / src)
+ local = _resolve_local(src)
+ if local:
+ tracked.append(local)
@@
for entry in model.css_libraries:
href = entry.get("href", "")
if href and not href.startswith(("http://", "https://")):
- tracked.append(source_dir / href)
+ local = _resolve_local(href)
+ if local:
+ tracked.append(local)
@@
for entry in model.link_libraries:
href = entry.get("href", "")
if href and not href.startswith(("http://", "https://")):
- tracked.append(source_dir / href)
+ local = _resolve_local(href)
+ if local:
+ tracked.append(local)
@@
for static_path in model.static_paths:
- static_abs = source_dir / static_path
+ static_abs = _resolve_local(static_path)
+ if not static_abs:
+ continue
if static_abs.is_dir():
tracked.extend(f for f in static_abs.rglob("*") if f.is_file())
elif static_abs.is_file():
tracked.append(static_abs)🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cli/pages/engines/helpers.py` around lines 85 - 107, get_tracked_files
currently appends manifest paths directly (from model.js_libraries,
model.css_libraries, model.link_libraries, model.static_paths) which allows
absolute paths or paths with ".." to escape source_dir; change the logic to
resolve/normalize each candidate path (use source_dir / path then call
.resolve() or equivalent) and only append it to tracked if the resolved path is
contained within source_dir (e.g., path.is_relative_to(source_dir) or comparing
parents) and is a file; apply the same containment check for static_abs before
rglob/append, and still filter by _COPY_EXCLUDED at the end.
```python
response = httpx.post(url, json=payload, timeout=10.0)
if not response.is_success:
    time.sleep(2)
    response = httpx.post(url, json=payload, timeout=10.0)
response.raise_for_status()
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify register flow only retries on non-success response today,
# and does not catch request/transport exceptions.
rg -n -C4 'def register_page_in_argo|httpx\.post|is_success|raise_for_status|except httpx' cli/pages/engines/helpers.py
Repository: ubidots/ubidots-cli
Length of output: 907
🏁 Script executed:
#!/bin/bash
# Read the complete register_page_in_argo function
sed -n '181,210p' cli/pages/engines/helpers.py
Repository: ubidots/ubidots-cli
Length of output: 1224
🏁 Script executed:
#!/bin/bash
# Find where register_page_in_argo is called to see if caller handles exceptions
rg -n 'register_page_in_argo' --type py
Repository: ubidots/ubidots-cli
Length of output: 1033
🏁 Script executed:
#!/bin/bash
# Check imports to understand httpx behavior
head -50 cli/pages/engines/helpers.py | grep -n 'import'
Repository: ubidots/ubidots-cli
Length of output: 240
🏁 Script executed:
#!/bin/bash
# Check context around where register_page_in_argo is called
sed -n '215,235p' cli/pages/pipelines/dev_engine.py
Repository: ubidots/ubidots-cli
Length of output: 668
🏁 Script executed:
#!/bin/bash
# Check if there's broader try/except in dev_engine.py around this call
rg -n -B10 -A5 'register_page_in_argo\(data' cli/pages/pipelines/dev_engine.py
Repository: ubidots/ubidots-cli
Length of output: 577
🏁 Script executed:
#!/bin/bash
# Find PipelineStep definition
rg -n 'class PipelineStep' --type py
Repository: ubidots/ubidots-cli
Length of output: 110
🏁 Script executed:
#!/bin/bash
# Check what PipelineStep looks like - find the full class definition
rg -n -A20 'class PipelineStep' cli/pages/pipelines/dev_engine.py | head -40
Repository: ubidots/ubidots-cli
Length of output: 45
🏁 Script executed:
#!/bin/bash
# Read PipelineStep class definition
sed -n '41,100p' cli/commons/pipelines.py
Repository: ubidots/ubidots-cli
Length of output: 1607
🏁 Script executed:
#!/bin/bash
# Check how the pipeline executes steps - look for execute calls
rg -n 'execute\(' cli/commons/pipelines.py | head -20
Repository: ubidots/ubidots-cli
Length of output: 169
Retry logic should handle request/transport failures.
If the first httpx.post(...) raises an exception (e.g., connection refused while Argo boots), no retry happens and registration fails immediately. The current code only retries on non-success HTTP responses but not on transport/connection errors. Wrap attempts in try/except and retry on httpx.HTTPError as well.
Suggested fix
- response = httpx.post(url, json=payload, timeout=10.0)
- if not response.is_success:
- time.sleep(2)
- response = httpx.post(url, json=payload, timeout=10.0)
- response.raise_for_status()
+ for attempt in range(2):
+ try:
+ response = httpx.post(url, json=payload, timeout=10.0)
+ response.raise_for_status()
+ return
+ except httpx.HTTPError:
+ if attempt == 1:
+ raise
+ time.sleep(2)🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cli/pages/engines/helpers.py` around lines 199 - 203, The current logic calls
httpx.post(url, json=payload, timeout=10.0) once and only retries when
response.is_success is false, so transport errors that raise during httpx.post
are not retried; update the block to perform the POST inside a try/except that
catches httpx.HTTPError (or more specific transport exceptions) and on exception
sleep(2) and retry the request once (or as configured), then call
response.raise_for_status() after a successful response; reference and update
the existing response variable usage, httpx.post calls, response.is_success
check, and response.raise_for_status to ensure both non-2xx responses and
HTTP/transport exceptions are retried.
```python
def _copy_all(source_dir: Path, workspace_dir: Path) -> list[Path]:
    tracked = _get_tracked(source_dir)
    for f in tracked:
        _copy_file(f, source_dir, workspace_dir)
    return tracked
```
Workspace sync never removes deleted/untracked files.
_copy_all() only copies current tracked files. If a file is removed from manifest.toml or deleted from a tracked static directory, stale copies remain in workspace_dir and can still be served.
💡 Suggested fix (prune removed files during full re-sync)
```diff
-def _copy_all(source_dir: Path, workspace_dir: Path) -> list[Path]:
+def _copy_all(
+    source_dir: Path,
+    workspace_dir: Path,
+    previous_tracked: set[str] | None = None,
+) -> list[Path]:
     tracked = _get_tracked(source_dir)
     for f in tracked:
         _copy_file(f, source_dir, workspace_dir)
+
+    if previous_tracked is not None:
+        current = {str(f) for f in tracked}
+        removed = previous_tracked - current
+        for old in removed:
+            old_path = Path(old)
+            try:
+                rel = old_path.relative_to(source_dir)
+            except ValueError:
+                continue
+            dst = workspace_dir / rel
+            if dst.exists():
+                dst.unlink()
+
     return tracked
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cli/pages/engines/templates/copy_watcher.py` around lines 62-67, the
full-sync function _copy_all currently only copies tracked files returned by
_get_tracked and never removes stale files in workspace_dir; update _copy_all to
compute the set of expected target paths (based on tracked list and their
relative destinations), delete any files/directories present in workspace_dir
that are not in that set (pruning stale artifacts), then proceed to call
_copy_file for each tracked path; use _get_tracked to build the expected list
and ensure deletions handle both files and empty directories safely before
returning the tracked list.
```python
        if path.name == "manifest.toml":
            logging.info("manifest.toml changed — re-syncing all files")
```
Restrict manifest re-sync trigger to the root manifest file.
Line 113 currently matches any file named manifest.toml under the tree. A nested asset with that filename will trigger a full re-sync unnecessarily.
💡 Suggested fix
```diff
-        if path.name == "manifest.toml":
+        if path == (source_dir / "manifest.toml"):
```
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
        if path == (source_dir / "manifest.toml"):
            logging.info("manifest.toml changed — re-syncing all files")
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cli/pages/engines/templates/copy_watcher.py` around lines 113-114, the
current check uses path.name == "manifest.toml" which matches nested files;
change it to only trigger when the manifest is the repository/watch root
manifest by comparing the full path to the root manifest (e.g., replace the
condition with path == root_dir / "manifest.toml" or path.parent == root_dir),
so the re-sync in the block that logs "manifest.toml changed — re-syncing all
files" only runs for the root manifest file.
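The difference between the two checks is easy to demonstrate with `pathlib`: the name-only check fires for any nested `manifest.toml`, while the full-path comparison fires only for the root one. The paths below are illustrative:

```python
from pathlib import PurePosixPath as Path  # pure paths keep the demo OS-independent

source_dir = Path("/project")
root_manifest = Path("/project/manifest.toml")
nested = Path("/project/static/vendor/manifest.toml")

# Name-only check: true for both paths, so a nested asset triggers a re-sync.
assert root_manifest.name == "manifest.toml"
assert nested.name == "manifest.toml"

# Full-path check: true only for the root manifest.
assert root_manifest == source_dir / "manifest.toml"
assert nested != source_dir / "manifest.toml"
```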
docs: update architecture and modules docs for Argo migration
test: add tests for Argo migration and copy-watcher architecture
Summary by CodeRabbit
New Features
Bug Fixes & Improvements
Documentation