From 1a9a508f7aee972154edfe470914a07bf2d0762b Mon Sep 17 00:00:00 2001
From: Tobias Macey
Date: Tue, 24 Mar 2026 14:14:42 -0400
Subject: [PATCH 1/4] feat: migrate to uv + add ContainerGrader for
 Kubernetes/Docker sandboxed grading (#14)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

* chore: migrate from pip-compile to uv for dependency management

- Run migrate-to-uv to bootstrap pyproject.toml from requirements/base.txt
  and requirements/test.txt
- Add full project metadata: name, version, description,
  requires-python >= 3.11, license, hatchling build backend, entry point
  xqueue-watcher -> manager:main
- Add newrelic as [project.optional-dependencies.production]
- Add a dev dependency group: coverage, mock, pytest-cov
- Remove setup.py (replaced by pyproject.toml)
- Remove all requirements/*.in and requirements/*.txt files (14 files)
- Generate uv.lock with a pinned dependency graph
- Update Makefile: replace the pip/pip-compile targets with uv sync /
  uv run pytest
- Update .github/workflows/ci.yml: use astral-sh/setup-uv@v4, drop
  ubuntu-20.04 and Python 3.8, add Python 3.13, update to actions/checkout@v4
- Replace the upgrade-python-requirements workflow with uv lock --upgrade +
  a create-pull-request workflow

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* fix: remove AppArmor/codejail hard dependency; make codejail optional

- Remove six (Python 2 compat shim) from imports and SUPPORT_FILES in
  jailedgrader.py — Python 3 only going forward
- Wrap codejail imports in try/except in jailedgrader.py and manager.py;
  raise RuntimeError with a clear message directing users to ContainerGrader
- Fix Path.abspath() -> Path.absolute() (breaking API change in path v17)
  in grader.py and jailedgrader.py
- Update Dockerfile: ubuntu:xenial -> python:3.11-slim, remove the apparmor
  and language-pack-en packages, fix layer ordering
- Update test_codejail_config to use fork_per_item=False to avoid a
  multiprocessing state-inheritance failure on the Python 3.14 forkserver
  default
- Update the conf.d/600.json example to use the ContainerGrader handler

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* feat: add ContainerGrader for Kubernetes/Docker-based sandboxed grading

Adds xqueue_watcher/containergrader.py — a drop-in replacement for
JailedGrader that executes student code inside an isolated container
instead of using AppArmor/codejail.

Security model (replaces AppArmor):
- Container isolation (Linux namespaces + cgroups)
- Non-root user (UID 1000), read-only root filesystem
- CPU/memory resource limits enforced by the container runtime
- Network disabled for grader containers (no egress)
- Hard wall-clock timeout via activeDeadlineSeconds (k8s) or timeout (Docker)

Two pluggable backends, selected via the 'backend' KWARGS option:

kubernetes (default / production):
- Creates a batch/v1 Job per submission using the kubernetes Python client
- Auto-detects in-cluster vs kubeconfig credentials
- Polls until the Job completes, collects stdout from pod logs
- Deletes the Job after result collection (ttlSecondsAfterFinished=300)
- Job pod spec includes: securityContext, resource limits,
  activeDeadlineSeconds, and labels for observability

docker (local dev / CI):
- Runs a container via the docker Python SDK
- Bind-mounts the grader directory read-only
- Passes SUBMISSION_CODE as an environment variable
- Network disabled, memory + CPU limits applied

Student code is passed via the SUBMISSION_CODE env var (avoiding argv
length limits and shell injection). The entrypoint writes it to /tmp before
invoking grader_support.run, producing the same JSON output format that
JailedGrader already expects — so no changes to the grader test framework
or to course team grader code are required.
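A minimal sketch of the env-var handoff described above. The SUBMISSION_CODE name and /tmp/submission.py path are from this patch; the writer helper itself is illustrative, not the actual entrypoint code:

```python
import os
import tempfile
from pathlib import Path

def write_submission(tmp_dir="/tmp"):
    """Illustrative: persist the student code passed via SUBMISSION_CODE.

    Using an env var avoids argv length limits and shell injection because
    the code is never interpolated into a command line.
    """
    code = os.environ["SUBMISSION_CODE"]
    target = Path(tmp_dir) / "submission.py"
    target.write_text(code, encoding="utf-8")
    return target

if __name__ == "__main__":
    # Local demonstration with a throwaway directory instead of /tmp
    os.environ["SUBMISSION_CODE"] = "def square(x):\n    return x * x\n"
    with tempfile.TemporaryDirectory() as d:
        path = write_submission(d)
        print(path.read_text())
```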
Configuration example (conf.d/my-course.json):

    {
      "my-course": {
        "HANDLERS": [{
          "HANDLER": "xqueue_watcher.containergrader.ContainerGrader",
          "KWARGS": {
            "grader_root": "/graders/my-course/",
            "image": "registry.example.com/my-course:latest",
            "backend": "kubernetes",
            "cpu_limit": "500m",
            "memory_limit": "256Mi",
            "timeout": 20
          }
        }]
      }
    }

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* feat: add grader base Docker image and container entrypoint

grader_support/Dockerfile.base:
- python:3.11-slim base, non-root grader user (UID 1000)
- Copies the grader_support framework; installs path-py
- ENTRYPOINT: python -m grader_support.entrypoint
- /tmp volume for submission files (writable even with a read-only root fs)
- Course teams extend this image to add their deps and grader scripts

grader_support/entrypoint.py:
- Reads the SUBMISSION_CODE env var, writes it to /tmp/submission.py
- Adds /tmp and the cwd to sys.path, then delegates to grader_support.run
- Prints the JSON result to stdout (same schema JailedGrader already parses)

grader_support/README.md:
- Course team authoring guide: how to extend the base image, configure the
  handler, and understand the security properties

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* feat: add Kubernetes deployment manifests and Docker Compose local dev

deploy/kubernetes/ (Kustomize-compatible):
- serviceaccount.yaml — dedicated SA for xqueue-watcher pods
- rbac.yaml — Role + RoleBinding: create/delete Jobs, read pod logs
- configmap.yaml — watcher xqwatcher.json config (edit for your queues)
- deployment.yaml — 2 replicas, topologySpreadConstraints, securityContext,
  resource limits, readinessProbe
- networkpolicy.yaml — deny all ingress/egress on grader Job pods
  (label: role=grader-job); allow xqueue-watcher egress to xqueue
- secret.yaml.template — placeholder: copy to secret.yaml, fill in
  credentials, do not commit secret.yaml (added to .gitignore)
- kustomization.yaml — Kustomize entry point for the base directory

docker-compose.yml (local dev):
- xqueue-watcher container with Docker socket access (for the docker backend)
- Mounts conf.d/ and the grader directories
- Includes a sample xqueue service reference for a full local stack

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* fix: correct grader path handling in ContainerGrader and entrypoint

ContainerGrader had two bugs affecting how grader files were located inside
the container at runtime:

1. The Docker backend bind-mounted the grader problem directory at /grader,
   overwriting the grader_support package that the base image copies there.
   Fixed by binding at /graders instead and passing the resulting absolute
   in-container path (/graders/) to the entrypoint.

2. The Kubernetes backend set working_dir to the grader problem directory
   (e.g. /graders/ps07/Robot/), preventing Python from finding the
   grader_support package, which lives at /grader/grader_support/. Fixed by
   keeping working_dir=/grader (the base image WORKDIR) and passing the
   absolute grader path in args instead of just the basename.

entrypoint.py previously passed the full absolute path verbatim to
__import__(), which fails for paths containing slashes. It now detects
absolute paths, inserts the parent directory into sys.path, and uses only
the basename as the importable module name.

Also updates grader_support/README.md to document the correct layout
(/graders/ for course grader scripts, /grader/ for grader_support) and the
gradelib compatibility note for course teams migrating from Python 2
graders.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* fix(tests): skip jailed grader tests when codejail is not installed

codejail is an optional dependency (not installed in CI). Guard the import
with a try/except and apply @pytest.mark.skipif to the test class so that
collection succeeds and the tests are skipped gracefully when codejail is
absent.
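The guard pattern can be sketched with stdlib unittest (the commit itself uses @pytest.mark.skipif, which works the same way; the class and test names here are invented for illustration):

```python
import unittest

# Optional-dependency guard: test collection must succeed even when the
# sandboxing library is absent.
try:
    import codejail  # noqa: F401
    HAS_CODEJAIL = True
except ImportError:
    HAS_CODEJAIL = False

@unittest.skipIf(not HAS_CODEJAIL, "codejail is not installed")
class JailedGraderTests(unittest.TestCase):
    def test_grading_smoke(self):
        # Real tests would exercise JailedGrader here; without codejail the
        # whole class is reported as skipped rather than erroring.
        self.assertTrue(HAS_CODEJAIL)
```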
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* fix: address PR review feedback

- Dockerfile: replace the deleted requirements/ pip install with uv sync
  (copies the uv binary from ghcr.io/astral-sh/uv and uses uv sync --frozen)
- grader.py: guard against path traversal in grader_config['grader'];
  validate that the resolved grader path stays within grader_root
- containergrader.py: fix a Docker SDK TypeError — containers.run() does
  not accept a timeout kwarg; switch to detach=True + container.wait() to
  enforce the timeout, then collect logs and remove the container
- containergrader.py: remove brittle hardcoded line numbers (L364, L379,
  L397, L450) from user-facing error messages
- docker-compose.yml: change the conf.d and data volumes from :ro to :rw so
  local edits take effect without a rebuild (matches comment intent)
- upgrade-python-requirements.yml: add an explicit permissions block
  (contents: write, pull-requests: write) as required by security policy
- Automated code Graders With xqueue-watcher.md: remove an empty heading,
  add a 'Property' header to the comparison table

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* refactor: replace path-py with stdlib pathlib

path-py is an external dependency that wraps pathlib with a fluent API.
Since we now require Python >= 3.11, pathlib covers all the same
functionality without an extra dependency.
Changes:
- Replace 'from path import Path' with 'from pathlib import Path' in all
  source and test files
- .dirname() → .parent
- .basename() → .name
- .abspath() / .absolute() → .resolve() (symlink-safe)
- .files('*.json') → .glob('*.json') (with sorted() for stable ordering)
- Remove path-py (path-py / path) from the pyproject.toml dependencies
- Regenerate uv.lock (removes path==17.1.1 and path-py==12.5.0)
- Simplify the grader.py path-traversal check: now that grader_path is a
  native pathlib.Path, the inline 'import pathlib' is no longer needed
- Fix the test_grader.py mock: grader_path.endswith() → grader_path.name ==
- Fix test_manager.py: pass str() to argparse (Path is not subscriptable)

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* feat: add edx-codejail as optional dependency; document container
  isolation decision

Add edx-codejail (the upstream PyPI package, v4.1.0) as an optional
'codejail' extra, replacing the previously pinned git-URL reference to a
specific commit:

    uv add --optional codejail edx-codejail

codejail is intentionally excluded from the base Docker image because
ContainerGrader uses container-level isolation (Linux namespaces, cgroups,
capability dropping, network isolation, read-only filesystem), which
provides equivalent sandboxing to AppArmor without requiring the host-level
AppArmor configuration that is unavailable inside Kubernetes pods. Install
the 'codejail' extra only when using the legacy JailedGrader on a
bare-metal or VM host with AppArmor configured.

To use:

    uv sync --extra codejail

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* fix: address second round of PR review feedback

- Makefile: fix tab indentation on all recipe lines (was space-indented)
- grader.py: remove an unused sys import
- jailedgrader.py: replace the deprecated load_module() with
  spec_from_file_location/exec_module
- containergrader.py:
  - remove unused imports (logging, os, tempfile) and the _JOB_LABEL
    constant
  - add an emptyDir volume at /tmp in the K8s Job spec (required when
    read_only_root_filesystem=True)
  - add a clarifying comment that K8s grader scripts are baked into the
    course image
  - replace the deprecated load_module() with the importlib.util
    spec/exec_module pattern
  - capture stderr from the Docker container on non-zero exit for better
    diagnostics
- grader_support/entrypoint.py: correct a misleading comment about /tmp
  writability
- deploy/kubernetes/deployment.yaml: fix the command to use the
  xqueue-watcher entry point
- deploy/kubernetes/configmap.yaml: add an xqueue-watcher-queue-configs
  ConfigMap so the manifests apply cleanly out of the box
- docker-compose.yml: mount the Docker socket so the docker backend works
- conf.d/600.json: use an absolute /graders/ path instead of the relative
  ../data path
- Dockerfile: use the C.UTF-8 locale (available without installing the
  locales package)
- pyproject.toml: add edx-codejail to the dev group so the jailed grader
  tests run in CI

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* refactor: move full grading pipeline into container; add ContainerGrader
  unit tests

Architecture change: grader scripts are baked into the course-specific
Docker image, so the watcher pod no longer needs to access grader files
locally. The grader_support entrypoint now runs the complete grading
pipeline inside the container (load grader, preprocess, run answer +
submission, compare, return the JSON grade), and ContainerGrader.grade() is
simplified to just launch the container and parse its JSON output.
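The simplified launch-and-parse flow might look roughly like this sketch, where run_container is an injected stand-in for the Kubernetes/Docker backend (the real method names and signatures differ):

```python
import json

def grade(run_container, grader_path, submission, seed=1):
    """Illustrative: launch the grading container and parse its JSON verdict.

    run_container is a callable standing in for the Kubernetes or Docker
    backend; it returns the container's combined stdout.
    """
    output = run_container(grader_path, submission, seed)
    # The entrypoint prints exactly one JSON object as its final line, so
    # take the last non-empty line and ignore any earlier noise.
    last_line = output.strip().splitlines()[-1]
    return json.loads(last_line)

# Usage with a fake backend that mimics a pod log with leading chatter:
fake_backend = lambda *a: 'loading grader\n{"correct": true, "score": 1}\n'
result = grade(fake_backend, "/graders/ps01/grade.py", "print('hi')")
```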
Changes:
- grader_support/entrypoint.py: complete rewrite; now takes GRADER_FILE
  SEED (not GRADER_FILE submission.py SEED); runs the full grade pipeline
  in the container; reads the GRADER_LANGUAGE and HIDE_OUTPUT env vars from
  ContainerGrader
- xqueue_watcher/containergrader.py:
  - Remove grader-module loading, gettext, answer.py reading, and all
    test-comparison logic from grade() — the container handles this now
  - grade() now just calls _run() and parses the returned JSON
  - _run() accepts grader_config and forwards lang/hide_output as env vars
  - _build_k8s_job(): args are now [grader_abs, seed] (not 3 args), adds
    the GRADER_LANGUAGE and HIDE_OUTPUT env vars, still mounts emptyDir at
    /tmp
  - _run_docker(): same arg change; passes GRADER_LANGUAGE and HIDE_OUTPUT
  - ReadTimeout from container.wait() is caught and re-raised as a clear
    RuntimeError
  - Remove the unused _truncate, _prepend_coding, and importlib.util
- tests/test_container_grader.py: 36 new unit tests covering:
  - _parse_cpu_millis
  - ContainerGrader init / backend validation
  - _build_k8s_job: args, env vars, resource limits, emptyDir /tmp, security
  - _run_docker: success, non-zero exit (with stderr), timeout, missing SDK
  - grade(): skip_grader, successful result, container failure, size warning

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* refactor: replace statsd/newrelic with OpenTelemetry; add 12-factor
  settings

- Remove the dogstatsd-python dependency; replace the statsd
  instrumentation in grader.py with OpenTelemetry counters and a histogram
- Add xqueue_watcher/metrics.py: configure_metrics() wires a MeterProvider
  with an OTLP HTTP exporter when OTEL_EXPORTER_OTLP_ENDPOINT is set; all
  four instruments (process_item, grader_payload_error, grading_time,
  replies) are defined at module level against the global proxy meter
- Call configure_metrics() from Manager.configure_from_directory() so the
  real provider is installed before any submissions are processed
- Add xqueue_watcher/env_settings.py: get_manager_config_from_env() reads
  all manager config from XQWATCHER_* environment variables, compatible
  with 12-factor / Kubernetes deployment patterns
- Remove newrelic from the production optional-dependency group and from
  the edx.org Dockerfile stage; the stage now runs xqueue-watcher directly
- Add opentelemetry-api, opentelemetry-sdk, and
  opentelemetry-exporter-otlp-proto-http to the core dependencies;
  regenerate uv.lock
- Add tests/test_env_settings.py and tests/test_metrics.py

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* chore: remove planning doc from git tracking

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* chore: remove codecov upload from CI

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* fix: address PR #14 review feedback

- docker-compose.yml: remove the unused GRADER_BACKEND env var, fix a
  duplicate volumes key by merging into one list, tag sample-grader with
  image: grader-base:local so the conf.d/600.json reference resolves
- Dockerfile: standardise the CMD config path to /etc/xqueue-watcher to
  match the docker-compose and Kubernetes manifests
- metrics.py: remove OTEL_METRIC_EXPORT_INTERVAL from the docstring since
  it is not wired up in _build_meter_provider()
- containergrader.py: add pod template metadata labels so the NetworkPolicy
  podSelector (app.kubernetes.io/component=xqueue-grader) actually matches
  grading pods; set automount_service_account_token=False on the grading
  pod spec to reduce the blast radius if the NetworkPolicy is
  misconfigured; add a _parse_memory_bytes() helper and use it for the
  Docker backend mem_limit so Kubernetes-style strings like '256Mi' are
  converted to bytes rather than passed raw (which Docker does not accept)

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* fix: add venv bin to PATH so the xqueue-watcher entrypoint resolves

uv installs the console script into the project virtual environment at
.venv/bin/xqueue-watcher. Without adding this directory to PATH, the CMD
cannot be found at container startup.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* feat: add configure_logging() for 12-factor stdout logging

When no logging.json file is present, manager.py now calls
configure_logging() from env_settings instead of basicConfig().
configure_logging() sets up a single StreamHandler on stdout with a
consistent timestamp/level/module format, honours XQWATCHER_LOG_LEVEL
(default INFO), and suppresses noisy requests/urllib3 debug output. This
removes the need for a logging.json file in Kubernetes deployments.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* fix: symlink xqueue-watcher into /usr/local/bin for reliable resolution

Setting PATH via ENV is fragile -- container runtimes and security policies
can reset or ignore it. Install a symlink at /usr/local/bin/xqueue-watcher
(always in the standard system PATH) so the entrypoint resolves regardless
of how the container is launched.

Also remove the stale NEW_RELIC_LICENSE_KEY env entry from the Kubernetes
deployment manifest.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* feat: add env-based defaults for ContainerGrader configuration

Add get_container_grader_defaults() to env_settings, reading five new
XQWATCHER_GRADER_* env vars:

    XQWATCHER_GRADER_BACKEND       (default: kubernetes)
    XQWATCHER_GRADER_NAMESPACE     (default: default)
    XQWATCHER_GRADER_CPU_LIMIT     (default: 500m)
    XQWATCHER_GRADER_MEMORY_LIMIT  (default: 256Mi)
    XQWATCHER_GRADER_TIMEOUT       (default: 20)

ContainerGrader.__init__ now uses None sentinels for these params, so any
value omitted from a conf.d KWARGS block falls back to the env-derived
default rather than a hardcoded constant. Values supplied explicitly in
conf.d always take precedence, preserving backwards compatibility.

Also fixes duplicate function definitions that had crept into
env_settings.py.
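The None-sentinel precedence chain can be sketched as follows (the resolve_setting helper is illustrative, not the actual get_container_grader_defaults API; the env var names and defaults are from this commit):

```python
import os

# Hard-coded fallbacks mirror the defaults listed above.
_FALLBACKS = {
    "backend": "kubernetes",
    "namespace": "default",
    "cpu_limit": "500m",
    "memory_limit": "256Mi",
    "timeout": 20,
}

def resolve_setting(name, explicit=None):
    """Illustrative precedence: conf.d KWARGS > XQWATCHER_GRADER_* env > fallback."""
    if explicit is not None:  # a value supplied in conf.d always wins
        return explicit
    env_value = os.environ.get(f"XQWATCHER_GRADER_{name.upper()}")
    if env_value is not None:
        return int(env_value) if name == "timeout" else env_value
    return _FALLBACKS[name]
```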
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* feat(containergrader): add ImageDigestPoller and image pull policy support

- Add an ImageDigestPoller class: a background daemon thread that
  periodically resolves a tag-based image reference to its current digest
  via docker.APIClient.inspect_distribution(). Thread-safe; falls back to
  the original reference if resolution fails.
- Add an image_pull_policy param to ContainerGrader (auto-detected:
  IfNotPresent for digest refs, Always for tag-based refs; can be
  overridden explicitly).
- Add poll_image_digest and digest_poll_interval params to activate the
  poller. When enabled, Kubernetes Jobs use the most recently resolved
  repo@sha256:… reference via _effective_image(), ensuring nodes always run
  the latest pushed image without relying on imagePullPolicy: Always for
  every pod.
- Add .github/workflows/publish-grader-base-image.yml to build and push
  grader_support/Dockerfile.base to
  ghcr.io/mitodl/xqueue-watcher-grader-base on push to master
  (grader_support/** paths), on a weekly schedule, and on
  workflow_dispatch. Multi-platform linux/amd64,linux/arm64.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* fix: normalise imagePullPolicy to title-case before K8s API call

Kubernetes requires imagePullPolicy to be exactly 'Always', 'IfNotPresent',
or 'Never' (case-sensitive). When the value is supplied via KWARGS in the
conf.d JSON (e.g. 'always' or 'ALWAYS'), the K8s API returns 422
Unprocessable Entity.

Add a normalisation dict lookup that maps the lowercased input back to the
canonical title-case form. Unknown values are passed through unchanged so
Kubernetes can surface the validation error with a clear message.
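A dict lookup is needed here because str.title() would produce 'Ifnotpresent'; a minimal sketch of the normalisation (the helper name is illustrative):

```python
# Canonical forms required by the Kubernetes API (case-sensitive).
_PULL_POLICIES = {
    "always": "Always",
    "ifnotpresent": "IfNotPresent",
    "never": "Never",
}

def normalize_pull_policy(value):
    """Map any casing of a known policy to its canonical form.

    Unknown values pass through untouched so the Kubernetes API can reject
    them with its own validation message.
    """
    return _PULL_POLICIES.get(value.strip().lower(), value)
```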
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* feat: add strip_path_components to ContainerGrader for legacy path prefixes

LMS grader_payload 'grader' fields configured against the old git-clone
deployment include a queue-name prefix, e.g.:

    mit-600x-Watcher-MITX-6.0001r/graders/python3graders/chips1/.../grade.py

In the containerized approach, graders are baked directly into the image at
grader_root, so the path resolves to:

    /graders/mit-600x-Watcher-MITX-6.0001r/graders/python3graders/...

which doesn't exist. The actual path in the image is:

    /graders/python3graders/...

Add a strip_path_components (int, default 0) KWARG to ContainerGrader.
When > 0, that many leading path components are stripped from the grader
path (relative to grader_root) before it is passed as the container
entrypoint argument. Set it to 2 to remove both the queue-name component
and the redundant repo subdirectory name.

Example KWARGS: "strip_path_components": 2

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* fix: install gettext into builtins before loading grader module

Grader scripts may call _() at module level (e.g. in input_validators
defined at import time). The previous code installed trans.install() after
exec_module, causing NameError: name '_' is not defined. Move the entire
locale/gettext setup block to before exec_module so _ is available in
builtins when the grader script is first executed.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* fix: normalize mixed tab/space indentation before exec

Python 3 raises TabError when exec'ing code that mixes tabs and spaces in
the same indented block. Many course grader answer.py files were authored
for Python 2, which tolerated this. Call expandtabs(4) on both the staff
answer and the student submission before preprocessing and writing to /tmp,
so exec never sees raw tabs.
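The tab normalization can be demonstrated on a small Python-2-era snippet (the grader code below is invented for illustration):

```python
# Mixed tabs and spaces in one block: tolerated by Python 2, TabError in
# Python 3 because the indentation is ambiguous across tab widths.
legacy_source = "def check(s):\n\tfor c in s:\n        print(c)\n"

try:
    compile(legacy_source, "<answer>", "exec")
except TabError:
    pass  # exactly the failure seen with unmodified course graders

# expandtabs(4) rewrites every tab as spaces, so the indentation becomes
# unambiguous and the same source compiles cleanly.
normalized = legacy_source.expandtabs(4)
compile(normalized, "<answer>", "exec")
```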
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* fix: sys.path ordering so preprocessed answer/submission shadow originals

run.py's import_captured() uses __import__() to load the answer and
submission modules. grader_dir was inserted into sys.path AFTER /tmp,
leaving grader_dir at position 0, so __import__('answer') found the
original /graders/.../answer.py (with a bare 'for c in s:') instead of the
preprocessed /tmp/answer.py (with 'submission_code = repr(...)').

Fix: insert grader_dir first, then /tmp, so /tmp sits at position 0 and the
preprocessed files always shadow the originals.

Also:
- Add a _dbg() helper for debug tracing behind a GRADER_DEBUG=1 env var;
  off by default so stderr output doesn't corrupt the JSON pod log that
  containergrader.py reads via read_namespaced_pod_log.
- Import traceback (used by the _dbg exception paths).

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* chore: log raw container output bytes on JSON parse failure

Add an explicit ERROR-level log of the raw bytes (repr, up to 4096) when
json.loads fails, so we can see exactly what the pod log contains,
including any leading/trailing garbage from stderr that Kubernetes combines
into the pod log stream. Also add a DEBUG-level log of every container
output for tracing.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* chore: push grader base image to DockerHub as well as GHCR

Concourse grader-image pipelines use DockerHub as the trigger source. The
workflow previously only pushed to GHCR, so Concourse never saw updates to
the base image.
Changes:
- Add a DockerHub login step (DOCKERHUB_USERNAME/DOCKERHUB_PASSWORD secrets)
- Push to both mitodl/xqueue-watcher-grader-base (DockerHub) and
  ghcr.io/mitodl/xqueue-watcher-grader-base (GHCR)
- Tag :latest on feature branches during active development so Concourse
  picks up fixes without waiting for a master merge
- Add feature branches to the push trigger so grader_support fixes are
  published immediately

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* fix: request stdout-only stream from read_namespaced_pod_log

The default stream parameter is 'All', which interleaves stderr into the
returned string. Any stderr output from the container (Python warnings,
import messages, etc.) corrupts the JSON that the entrypoint prints to
stdout, causing a JSONDecodeError in the watcher. Pass stream='Stdout' and
container='grader' explicitly so only the container's stdout is returned.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* fix: extract last line from pod log instead of using stream param

The PodLogsQuery feature gate (which enables the 'stream' field in
PodLogOptions) is opt-in and disabled on the target cluster. Using stream=
returns a 422 FieldValueForbidden error even on K8s 1.35.

Instead, fetch the combined stdout+stderr log and scan backwards for the
last non-empty line. The entrypoint always prints exactly one JSON object
as its final output line, so this reliably extracts the result regardless
of any stderr noise interleaved earlier in the log.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* fix: bypass kubernetes client JSON deserialisation of pod logs

read_namespaced_pod_log returns response_type='str'. The client's
deserialize() method first calls json.loads() on the raw response body
(which succeeds, since the entrypoint outputs valid JSON), then passes the
resulting Python dict to __deserialize_primitive(dict, str), which calls
str(dict) — producing a Python repr with single-quoted keys and True/False
booleans, which is not valid JSON.

Fix: pass _preload_content=False to get the raw urllib3 response object and
read .data directly as bytes, bypassing the client deserialisation
entirely. The raw bytes are valid UTF-8 JSON as printed by the entrypoint.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* chore: add top-level permissions: {} to restrict default GITHUB_TOKEN scope

Addresses the GitHub Advanced Security finding 'Workflow does not contain
permissions'. Adding a workflow-level permissions: {} block ensures the
GITHUB_TOKEN has no default permissions; each job must explicitly declare
what it needs. The update-dependencies job retains its required
contents: write and pull-requests: write grants.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* refactor: remove strip_path_components from ContainerGrader

strip_path_components was added to work around what turned out to be a
configuration error in the LMS grader_payload, not a structural problem in
the grading path resolution. Remove the parameter, its __init__ assignment,
the stripping logic in _build_k8s_job, and all docstring references, to
keep the code simple and correct.

Also addressed in this commit:
- grader.py: downgrade the per-submission grading-time log from INFO to
  DEBUG to avoid high-volume noise in production log streams
- Dockerfile: pin uv to 0.10.7 via a named build stage instead of a
  floating ghcr.io/astral-sh/uv:latest; replace the xqueue-watcher symlink
  with ENV PATH so the full venv is on PATH
- env_settings.py: add an XQWATCHER_DOCKER_HOST_GRADER_ROOT env var
  (preparation for the docker_host_grader_root ContainerGrader param)

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* fix: add docker_host_grader_root; drop path-py from grader base image

ContainerGrader (Docker backend): add a docker_host_grader_root parameter
so that when xqueue-watcher runs inside a container, the bind-mount source
path can be translated from the watcher-container path to the equivalent
host-side path. Without this, the Docker daemon (reached via the mounted
socket) would look for the grader directory on the host, where it does not
exist. Defaults to the XQWATCHER_DOCKER_HOST_GRADER_ROOT env var or None
(watcher runs directly on the host, no translation needed).

docker-compose.yml: add an XQWATCHER_DOCKER_HOST_GRADER_ROOT placeholder
and an explanatory comment so operators know to set the absolute host path.

grader_support/Dockerfile.base: remove the path-py pip install. The
grader_support framework itself does not import path; course teams that
need path-py can add it in their own downstream image.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* feat: add named xqueue server references via xqueue_servers.json

Queue configs in conf.d can now use SERVER_REF to reference a named server
defined in xqueue_servers.json, avoiding the need to embed XQueue URLs and
credentials directly in grader configuration files.
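The SERVER_REF resolution could look roughly like this (the helper name, error wording, server name, URL, and AUTH shape are all illustrative; the patch implements resolution in client_from_config()):

```python
def resolve_server_ref(queue_config, servers):
    """Illustrative: replace SERVER_REF with the named server's settings.

    servers is the parsed xqueue_servers.json mapping of name -> settings.
    """
    config = dict(queue_config)
    ref = config.pop("SERVER_REF", None)
    if ref is None:
        return config
    if "SERVER" in config or "AUTH" in config:
        raise ValueError(f"SERVER_REF {ref!r} conflicts with inline SERVER/AUTH")
    if ref not in servers:
        raise ValueError(f"unknown xqueue server reference: {ref!r}")
    config.update(servers[ref])
    return config

# Usage with a hypothetical server definition:
servers = {"prod": {"SERVER": "https://xqueue.example.com",
                    "AUTH": ["user", "pass"]}}
resolved = resolve_server_ref({"SERVER_REF": "prod", "CONNECTIONS": 2}, servers)
```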
- settings.py: add get_xqueue_servers() to load and validate
  xqueue_servers.json from the config root
- manager.py: load xqueue_servers.json in configure_from_directory();
  resolve SERVER_REF in client_from_config(), raising ValueError for
  unknown names or conflicts with SERVER/AUTH
- env_settings.py: document the Kubernetes Secret volume-mount pattern for
  xqueue_servers.json as the preferred credentials delivery mechanism
- conf.d/600.json: update the example to use SERVER_REF
- tests: add ServerRefTests and TestGetXqueueServers covering resolution,
  error cases, and configure_from_directory integration
- tests/fixtures/config/xqueue_servers.json: fixture server for tests
- README.md: document SERVER_REF, the xqueue_servers.json format, and the
  Kubernetes Secret mounting pattern

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* chore: remove DockerHub push from grader base image workflow

Only push to GHCR. Remove the DockerHub login step, the DockerHub image
reference from the metadata action, and the DOCKERHUB_USERNAME /
DOCKERHUB_PASSWORD secret dependencies.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* fix: harden containergrader and XQueue client

- Fix TLS certificate verification: replace the hardcoded verify=False with
  a _VERIFY_TLS flag (default True). Operators can opt out via
  XQWATCHER_VERIFY_TLS=false for dev environments; a warning is logged when
  verification is disabled.
- Remove credentials from logs: strip self.password from the debug login
  message and the login-retry error message in client.py.
- Enforce a hard submission size limit: reject submissions larger than
  XQWATCHER_SUBMISSION_SIZE_LIMIT bytes (default 1 MB) before launching a
  container. This prevents etcd object-size overflows and
  resource-exhaustion attacks via very large env vars. Keep the existing
  32 KB warning for submissions that are large but within the limit.
- Add the seccomp RuntimeDefault profile to Kubernetes grading Jobs:
  applied at both the pod level (V1PodSecurityContext) and the container
  level (V1SecurityContext) to restrict the available syscall surface.
- Add a PID limit to the grading container resource limits: caps the number
  of processes a grading container may create at 256, preventing fork-bomb
  attacks from affecting other node workloads.
- Cap the /tmp emptyDir at 50 Mi: adds size_limit='50Mi' to the emptyDir
  volume backing /tmp in grading pods, preventing disk-exhaustion attacks.
- Add a path traversal pre-check in grader.py: explicitly reject grader
  paths containing '..' components before Path.resolve() is called,
  removing symlink edge-cases that could bypass the relative_to() guard.
- Update the containergrader module docstring and the env_settings docs to
  accurately describe the security posture and the new env vars.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* fix: address PR #14 review feedback

- Makefile: add the missing tab indentation on the help target recipe lines
- grader_support/entrypoint.py: fix an always-true EndTest check (use
  str(e).strip(), not e is not None)
- tests/test_env_settings.py: use clear=True in the hermetic default-value
  tests
- tests/test_metrics.py: use clear=True to prevent OTEL_ env vars bleeding in
- xqueue_watcher/client.py: apply _VERIFY_TLS in _request() and _login(),
  not just put_result
- xqueue_watcher/containergrader.py:
  - fix image repo parsing to handle registry:port/image:tag refs (rfind
    approach)
  - fix the 'pods' → 'pids' container resource limit
  - lazy-init the Kubernetes API clients once per instance (avoids a
    per-submission config load)
- xqueue_watcher/env_settings.py: parse HTTP_BASIC_AUTH into a
  (username, password) tuple
- xqueue_watcher/metrics.py: clarify that OTEL_RESOURCE_ATTRIBUTES is
  parsed by the SDK automatically

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

---------

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
---
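The rfind-based image reference parsing mentioned above relies on the fact that a ':' only denotes a tag when it appears after the final '/'; otherwise it belongs to the registry host:port. A sketch (the helper name is illustrative, and digest refs using '@' would need separate handling):

```python
def split_image_ref(ref):
    """Split 'repo[:tag]' where repo may itself contain 'registry:port'.

    Note: digest references ('repo@sha256:...') are not handled here.
    """
    slash = ref.rfind("/")
    colon = ref.rfind(":")
    if colon > slash:  # ':' after the last '/' separates the tag
        return ref[:colon], ref[colon + 1:]
    return ref, None

# A registry with a port plus a tagged image parses correctly:
repo, tag = split_image_ref("registry.example.com:5000/course/grader:latest")
```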
.github/workflows/ci.yml | 20 +- .../workflows/publish-grader-base-image.yml | 68 ++ .../workflows/upgrade-python-requirements.yml | 46 +- .gitignore | 7 + Dockerfile | 54 +- Makefile | 56 +- README.md | 85 +- conf.d/600.json | 13 +- deploy/kubernetes/configmap.yaml | 103 +++ deploy/kubernetes/deployment.yaml | 93 +++ deploy/kubernetes/kustomization.yaml | 25 + deploy/kubernetes/networkpolicy.yaml | 19 + deploy/kubernetes/rbac.yaml | 37 + deploy/kubernetes/secret.yaml.template | 15 + deploy/kubernetes/serviceaccount.yaml | 8 + docker-compose.yml | 51 ++ grader_support/Dockerfile.base | 19 + grader_support/README.md | 106 +++ grader_support/entrypoint.py | 236 ++++++ load_test/run.py | 4 +- pyproject.toml | 42 + requirements/base.in | 10 - requirements/base.txt | 26 - requirements/ci.in | 6 - requirements/ci.txt | 66 -- requirements/common_constraints.txt | 20 - requirements/constraints.txt | 11 - requirements/pip.in | 7 - requirements/pip.txt | 16 - requirements/pip_tools.in | 4 - requirements/pip_tools.txt | 26 - requirements/production.in | 5 - requirements/production.txt | 36 - requirements/test.in | 8 - requirements/test.txt | 54 -- setup.py | 14 - tests/fixtures/config/xqueue_servers.json | 6 + tests/test_container_grader.py | 347 +++++++++ tests/test_env_settings.py | 219 ++++++ tests/test_grader.py | 8 +- tests/test_jailed_grader.py | 14 +- tests/test_manager.py | 95 ++- tests/test_metrics.py | 90 +++ uv.lock | 726 ++++++++++++++++++ xqueue_watcher/client.py | 20 +- xqueue_watcher/containergrader.py | 679 ++++++++++++++++ xqueue_watcher/env_settings.py | 256 ++++++ xqueue_watcher/grader.py | 33 +- xqueue_watcher/jailedgrader.py | 49 +- xqueue_watcher/manager.py | 64 +- xqueue_watcher/metrics.py | 78 ++ xqueue_watcher/settings.py | 23 + 52 files changed, 3659 insertions(+), 464 deletions(-) create mode 100644 .github/workflows/publish-grader-base-image.yml create mode 100644 deploy/kubernetes/configmap.yaml create mode 100644 deploy/kubernetes/deployment.yaml 
create mode 100644 deploy/kubernetes/kustomization.yaml create mode 100644 deploy/kubernetes/networkpolicy.yaml create mode 100644 deploy/kubernetes/rbac.yaml create mode 100644 deploy/kubernetes/secret.yaml.template create mode 100644 deploy/kubernetes/serviceaccount.yaml create mode 100644 docker-compose.yml create mode 100644 grader_support/Dockerfile.base create mode 100644 grader_support/README.md create mode 100644 grader_support/entrypoint.py create mode 100644 pyproject.toml delete mode 100644 requirements/base.in delete mode 100644 requirements/base.txt delete mode 100644 requirements/ci.in delete mode 100644 requirements/ci.txt delete mode 100644 requirements/common_constraints.txt delete mode 100644 requirements/constraints.txt delete mode 100644 requirements/pip.in delete mode 100644 requirements/pip.txt delete mode 100644 requirements/pip_tools.in delete mode 100644 requirements/pip_tools.txt delete mode 100644 requirements/production.in delete mode 100644 requirements/production.txt delete mode 100644 requirements/test.in delete mode 100644 requirements/test.txt delete mode 100644 setup.py create mode 100644 tests/fixtures/config/xqueue_servers.json create mode 100644 tests/test_container_grader.py create mode 100644 tests/test_env_settings.py create mode 100644 tests/test_metrics.py create mode 100644 uv.lock create mode 100644 xqueue_watcher/containergrader.py create mode 100644 xqueue_watcher/env_settings.py create mode 100644 xqueue_watcher/metrics.py diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index 90bcd21..8f0f876 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -16,19 +16,17 @@ jobs: matrix: os: - ubuntu-latest - python-version: ['3.12'] + python-version: ['3.12', '3.13'] steps: - uses: actions/checkout@v4 - - name: setup python - uses: actions/setup-python@v5 + + - name: Install uv + uses: astral-sh/setup-uv@v4 with: - python-version: ${{ matrix.python-version }} + enable-cache: true - - name: 
Install requirements and Run Tests - run: make test + - name: Set up Python ${{ matrix.python-version }} + run: uv python install ${{ matrix.python-version }} - - name: Run Coverage - uses: codecov/codecov-action@v4 - with: - token: ${{ secrets.CODECOV_TOKEN }} - fail_ci_if_error: true + - name: Run Tests + run: uv run --python ${{ matrix.python-version }} pytest tests diff --git a/.github/workflows/publish-grader-base-image.yml b/.github/workflows/publish-grader-base-image.yml new file mode 100644 index 0000000..3eaac4c --- /dev/null +++ b/.github/workflows/publish-grader-base-image.yml @@ -0,0 +1,68 @@ +name: Publish grader base image + +# Builds grader_support/Dockerfile.base and pushes to: +# - GHCR: ghcr.io/mitodl/xqueue-watcher-grader-base + +on: + push: + branches: + - master + - feat/xqwatcher-kubernetes-migration + - chore/migrate-to-uv-and-k8s-container-grader + paths: + - "grader_support/**" + schedule: + # Weekly rebuild to pick up base Python/OS security patches (Sunday 00:00 UTC) + - cron: "0 0 * * 0" + workflow_dispatch: + +env: + IMAGE_NAME: mitodl/xqueue-watcher-grader-base + +jobs: + build-and-push: + name: Build and push grader base image + runs-on: ubuntu-latest + permissions: + contents: read + packages: write + + steps: + - name: Checkout repository + uses: actions/checkout@v4 + + - name: Log in to GHCR + uses: docker/login-action@v3 + with: + registry: ghcr.io + username: ${{ github.actor }} + password: ${{ secrets.GITHUB_TOKEN }} + + - name: Set up QEMU (for multi-platform builds) + uses: docker/setup-qemu-action@v3 + + - name: Set up Docker Buildx + uses: docker/setup-buildx-action@v3 + + - name: Extract image metadata + id: meta + uses: docker/metadata-action@v5 + with: + images: | + ghcr.io/${{ env.IMAGE_NAME }} + tags: | + type=raw,value=latest,enable={{is_default_branch}} + type=raw,value=latest,enable=${{ github.ref_name == 'chore/migrate-to-uv-and-k8s-container-grader' || github.ref_name == 'feat/xqwatcher-kubernetes-migration' }} + 
type=sha,format=short + + - name: Build and push + uses: docker/build-push-action@v6 + with: + context: . + file: grader_support/Dockerfile.base + platforms: linux/amd64,linux/arm64 + push: true + tags: ${{ steps.meta.outputs.tags }} + labels: ${{ steps.meta.outputs.labels }} + cache-from: type=gha + cache-to: type=gha,mode=max diff --git a/.github/workflows/upgrade-python-requirements.yml b/.github/workflows/upgrade-python-requirements.yml index 0cadb2c..be2cc16 100644 --- a/.github/workflows/upgrade-python-requirements.yml +++ b/.github/workflows/upgrade-python-requirements.yml @@ -1,27 +1,33 @@ -name: Upgrade Python Requirements +name: Update Dependencies on: schedule: - cron: "15 15 1/14 * *" workflow_dispatch: - inputs: - branch: - description: "Target branch against which to create requirements PR" - required: true - default: 'master' + +permissions: {} jobs: - call-upgrade-python-requirements-workflow: - uses: openedx/.github/.github/workflows/upgrade-python-requirements.yml@master - with: - branch: ${{ github.event.inputs.branch || 'master' }} - # optional parameters below; fill in if you'd like github or email notifications - # user_reviewers: "" - # team_reviewers: "" - email_address: "aurora-requirements-update@2u-internal.opsgenie.net" - send_success_notification: true - secrets: - requirements_bot_github_token: ${{ secrets.REQUIREMENTS_BOT_GITHUB_TOKEN }} - requirements_bot_github_email: ${{ secrets.REQUIREMENTS_BOT_GITHUB_EMAIL }} - edx_smtp_username: ${{ secrets.EDX_SMTP_USERNAME }} - edx_smtp_password: ${{ secrets.EDX_SMTP_PASSWORD }} + update-dependencies: + runs-on: ubuntu-24.04 + permissions: + contents: write + pull-requests: write + steps: + - uses: actions/checkout@v4 + + - name: Install uv + uses: astral-sh/setup-uv@v4 + + - name: Update uv.lock + run: uv lock --upgrade + + - name: Create Pull Request + uses: peter-evans/create-pull-request@v6 + with: + token: ${{ secrets.REQUIREMENTS_BOT_GITHUB_TOKEN }} + commit-message: "chore: update 
uv.lock with latest dependency versions" + title: "chore: update dependencies" + body: "Automated dependency update via `uv lock --upgrade`." + branch: "chore/update-dependencies" + delete-branch: true diff --git a/.gitignore b/.gitignore index 9fc011c..d371a40 100644 --- a/.gitignore +++ b/.gitignore @@ -22,3 +22,10 @@ reports/ \#*\# *.egg-info .idea/ + +# uv +.venv/ + +# Kubernetes secrets — never commit real values +deploy/kubernetes/secret.yaml +Automated code Graders With xqueue-watcher.md diff --git a/Dockerfile b/Dockerfile index 89ee5bb..1570fa7 100644 --- a/Dockerfile +++ b/Dockerfile @@ -1,26 +1,46 @@ -FROM ubuntu:xenial as openedx +ARG UV_VERSION=0.10.7 +FROM ghcr.io/astral-sh/uv:${UV_VERSION} AS uv -RUN apt update && \ - apt install -y git-core language-pack-en apparmor apparmor-utils python python-pip python-dev && \ - pip install --upgrade pip setuptools && \ - rm -rf /var/lib/apt/lists/* +FROM python:3.11-slim AS base -RUN locale-gen en_US.UTF-8 -ENV LANG en_US.UTF-8 -ENV LANGUAGE en_US:en -ENV LC_ALL en_US.UTF-8 +ENV PYTHONDONTWRITEBYTECODE=1 \ + PYTHONUNBUFFERED=1 \ + LANG=C.UTF-8 \ + LC_ALL=C.UTF-8 + +RUN apt-get update && \ + apt-get install -y --no-install-recommends git-core && \ + rm -rf /var/lib/apt/lists/* + +RUN useradd -m --shell /bin/false app + +COPY --from=uv /uv /usr/local/bin/uv WORKDIR /edx/app/xqueue_watcher -COPY requirements /edx/app/xqueue_watcher/requirements -RUN pip install -r requirements/production.txt -CMD python -m xqueue_watcher -d /edx/etc/xqueue_watcher +COPY pyproject.toml uv.lock ./ +RUN uv sync --frozen --no-dev --no-install-project + +COPY . /edx/app/xqueue_watcher +RUN uv sync --frozen --no-dev +# Note: the `codejail` optional extra (edx-codejail) is intentionally omitted +# from this image. 
In the Kubernetes deployment, student code runs inside an +# isolated container (ContainerGrader) — the container boundary provides the +# sandbox via Linux namespaces, cgroups, capability dropping, network isolation, +# and a read-only filesystem. codejail (AppArmor + OS-level user-switching) +# requires host-level AppArmor configuration that is unavailable inside +# Kubernetes pods and adds no meaningful security benefit on top of container +# isolation. Install the `codejail` extra only when running the legacy +# JailedGrader on a bare-metal or VM host with AppArmor configured. + +# Put the venv on PATH so `xqueue-watcher` and any other installed scripts are +# available without a symlink. +ENV PATH="/edx/app/xqueue_watcher/.venv/bin:$PATH" -RUN useradd -m --shell /bin/false app USER app -COPY . /edx/app/xqueue_watcher +CMD ["xqueue-watcher", "-d", "/etc/xqueue-watcher"] -FROM openedx as edx.org -RUN pip install newrelic -CMD newrelic-admin run-program python -m xqueue_watcher -d /edx/etc/xqueue_watcher +FROM base AS edx.org +USER app +CMD ["xqueue-watcher", "-d", "/etc/xqueue-watcher"] diff --git a/Makefile b/Makefile index 8a66fde..e94662c 100644 --- a/Makefile +++ b/Makefile @@ -1,47 +1,29 @@ -NODE_BIN=./node_modules/.bin - help: - @echo ' ' - @echo 'Makefile for the xqueue-watcher ' - @echo ' ' - @echo 'Usage: ' - @echo ' make requirements install requirements for local development ' - @echo ' make test run python unit-tests ' - @echo ' make clean delete generated byte code and coverage reports ' - @echo ' ' - -COMMON_CONSTRAINTS_TXT=requirements/common_constraints.txt -.PHONY: $(COMMON_CONSTRAINTS_TXT) -$(COMMON_CONSTRAINTS_TXT): - wget -O "$(@)" https://raw.githubusercontent.com/edx/edx-lint/master/edx_lint/files/common_constraints.txt || touch "$(@)" - -upgrade: export CUSTOM_COMPILE_COMMAND=make upgrade -upgrade: $(COMMON_CONSTRAINTS_TXT) - ## update the requirements/*.txt files with the latest packages satisfying requirements/*.in - pip install -q -r 
requirements/pip_tools.txt - pip-compile --allow-unsafe --rebuild --upgrade -o requirements/pip.txt requirements/pip.in - pip-compile --upgrade -o requirements/pip_tools.txt requirements/pip_tools.in - pip install -q -r requirements/pip.txt - pip install -q -r requirements/pip_tools.txt - pip-compile --upgrade -o requirements/base.txt requirements/base.in - pip-compile --upgrade -o requirements/production.txt requirements/production.in - pip-compile --upgrade -o requirements/test.txt requirements/test.in - pip-compile --upgrade -o requirements/ci.txt requirements/ci.in + @echo '' + @echo 'Makefile for the xqueue-watcher' + @echo '' + @echo 'Usage:' + @echo ' make requirements sync dev dependencies with uv' + @echo ' make test run python unit-tests' + @echo ' make docker-build build the grader base Docker image' + @echo ' make local-run run locally with docker-compose' + @echo ' make clean delete generated byte code' + @echo '' requirements: - pip install -qr requirements/production.txt --exists-action w + uv sync -test.requirements: - pip install -q -r requirements/test.txt --exists-action w +test: requirements + uv run pytest --cov=xqueue_watcher --cov-report=xml tests -ci.requirements: - pip install -q -r requirements/ci.txt --exists-action w +docker-build: + docker build -t xqueue-watcher:local . + docker build -t grader-base:local -f grader_support/Dockerfile.base . -test: test.requirements - pytest --cov=xqueue_watcher --cov-report=xml tests +local-run: + docker compose up clean: find . -name '*.pyc' -delete -# Targets in a Makefile which do not produce an output file with the same name as the target name -.PHONY: help requirements clean +.PHONY: help requirements test docker-build local-run clean diff --git a/README.md b/README.md index 245e940..5ffe2a1 100644 --- a/README.md +++ b/README.md @@ -18,9 +18,10 @@ root/ │ ├── ... # xqueue-watcher repo, unchanged │ └── ... 
├── config/ -│ └── conf.d/ +│ ├── conf.d/ │ │ └── my-course.json -│ └── logging.json +│ ├── logging.json +│ └── xqueue_servers.json # named server references (keep out of version control) └── my-course/ ├── exercise1/ │ ├── grader.py # - per-exercise grader @@ -46,6 +47,79 @@ Now you're ready to run it. python -m xqueue_watcher -d [path to the config directory, eg ../config] ``` +Named XQueue server references +------------------------------ + +XQueue server connection details (URL and credentials) can be defined once in +`xqueue_servers.json` and referenced by name from queue configs. This keeps +secrets out of course configuration files. + +**`config/xqueue_servers.json`** — define one or more named servers: +```json +{ + "default": { + "SERVER": "http://127.0.0.1:18040", + "AUTH": ["uname", "pwd"] + } +} +``` + +Queue configs in `conf.d` then use `SERVER_REF` instead of `SERVER`/`AUTH`: +```json +{ + "test-123": { + "SERVER_REF": "default", + "CONNECTIONS": 1, + "HANDLERS": [ + { + "HANDLER": "xqueue_watcher.grader.Grader", + "KWARGS": { + "grader_root": "/path/to/course/graders/" + } + } + ] + } +} +``` + +`SERVER_REF` and `SERVER`/`AUTH` are mutually exclusive — a `ValueError` is +raised at startup if both are present in the same queue config. 
+ +Kubernetes +---------- + +`xqueue_servers.json` is designed to be delivered as a mounted Kubernetes +Secret, keeping credentials completely separate from the rest of the +configuration (which can live in a ConfigMap): + +```yaml +# Secret — holds credentials +apiVersion: v1 +kind: Secret +metadata: + name: xqueue-servers +stringData: + xqueue_servers.json: | + { + "default": { + "SERVER": "http://xqueue-svc:18040", + "AUTH": ["lms", "s3cr3t"] + } + } +``` + +```yaml +# Deployment — mount alongside the rest of the config +volumes: + - name: xqueue-servers + secret: + secretName: xqueue-servers +volumeMounts: + - name: xqueue-servers + mountPath: /config/xqueue_servers.json + subPath: xqueue_servers.json +``` + The course configuration JSON file in `conf.d` should have the following structure: ```json { @@ -57,7 +131,7 @@ The course configuration JSON file in `conf.d` should have the following structu { "HANDLER": "xqueue_watcher.grader.Grader", "KWARGS": { - "grader_root": "/path/to/course/graders/", + "grader_root": "/path/to/course/graders/" } } ] @@ -66,8 +140,9 @@ The course configuration JSON file in `conf.d` should have the following structu ``` * `test-123`: the name of the queue -* `SERVER`: XQueue server address -* `AUTH`: List containing [username, password] of XQueue Django user +* `SERVER`: XQueue server address (omit when using `SERVER_REF`) +* `AUTH`: List containing [username, password] of XQueue Django user (omit when using `SERVER_REF`) +* `SERVER_REF`: name of a server defined in `xqueue_servers.json` (alternative to `SERVER`/`AUTH`) * `CONNECTIONS`: how many threads to spawn to watch the queue * `HANDLERS`: list of callables that will be called for each queue submission * `HANDLER`: callable name, see below for Submissions Handler diff --git a/conf.d/600.json b/conf.d/600.json index 10565e8..6866570 100644 --- a/conf.d/600.json +++ b/conf.d/600.json @@ -1,14 +1,17 @@ { "test-123": { - "SERVER": "http://127.0.0.1:18040", + "SERVER_REF": 
"default", "CONNECTIONS": 1, - "AUTH": ["lms", "lms"], "HANDLERS": [ { - "HANDLER": "xqueue_watcher.grader.Grader", + "HANDLER": "xqueue_watcher.containergrader.ContainerGrader", "KWARGS": { - "grader_root": "../data/6.00x/graders/", - "gradepy": "../data/6.00x/graders/grade.py" + "grader_root": "/graders/", + "image": "grader-base:local", + "backend": "docker", + "cpu_limit": "500m", + "memory_limit": "256Mi", + "timeout": 20 } } ] diff --git a/deploy/kubernetes/configmap.yaml b/deploy/kubernetes/configmap.yaml new file mode 100644 index 0000000..b910827 --- /dev/null +++ b/deploy/kubernetes/configmap.yaml @@ -0,0 +1,103 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + name: xqueue-watcher-config + namespace: xqueue-watcher + labels: + app.kubernetes.io/name: xqueue-watcher + app.kubernetes.io/component: watcher +data: + # Main watcher settings. See xqueue_watcher/settings.py for all keys. + xqwatcher.json: | + { + "POLL_INTERVAL": 1, + "LOGIN_POLL_INTERVAL": 5, + "REQUESTS_TIMEOUT": 5, + "POLL_TIME": 10 + } + + # Logging configuration + logging.json: | + { + "version": 1, + "disable_existing_loggers": false, + "formatters": { + "standard": { + "format": "%(asctime)s %(levelname)s %(name)s %(message)s" + } + }, + "handlers": { + "console": { + "class": "logging.StreamHandler", + "formatter": "standard", + "stream": "ext://sys.stdout" + } + }, + "root": { + "handlers": ["console"], + "level": "INFO" + } + } + + # Example queue config — copy this pattern for each course queue. + # Real configs live in conf.d/ mounted from a separate ConfigMap or Secret. 
+ example-queue.json.sample: | + { + "my-course-queue": { + "SERVER": "http://xqueue:18040", + "CONNECTIONS": 2, + "AUTH": ["xqueue_user", "xqueue_pass"], + "HANDLERS": [ + { + "HANDLER": "xqueue_watcher.containergrader.ContainerGrader", + "KWARGS": { + "grader_root": "/graders/my-course/", + "image": "registry.example.com/my-course-grader:latest", + "backend": "kubernetes", + "namespace": "xqueue-watcher", + "cpu_limit": "500m", + "memory_limit": "256Mi", + "timeout": 20 + } + } + ] + } + } +--- +# Queue-specific configurations: one JSON file per course queue. +# Operators replace or extend this with real queue names, server URLs, +# and grader images. AUTH credentials should be injected from a Secret +# (e.g., via Vault Secrets Operator) rather than stored in this ConfigMap. +apiVersion: v1 +kind: ConfigMap +metadata: + name: xqueue-watcher-queue-configs + namespace: xqueue-watcher + labels: + app.kubernetes.io/name: xqueue-watcher + app.kubernetes.io/component: queue-config +data: + # Replace with your actual queue configs. Each key becomes a file in + # /etc/xqueue-watcher/conf.d/ and must end in .json to be picked up. 
+ example-queue.json: | + { + "my-course-queue": { + "SERVER": "http://xqueue:18040", + "CONNECTIONS": 2, + "AUTH": ["xqueue_user", "xqueue_pass"], + "HANDLERS": [ + { + "HANDLER": "xqueue_watcher.containergrader.ContainerGrader", + "KWARGS": { + "grader_root": "/graders/my-course/", + "image": "registry.example.com/my-course-grader:latest", + "backend": "kubernetes", + "namespace": "xqueue-watcher", + "cpu_limit": "500m", + "memory_limit": "256Mi", + "timeout": 20 + } + } + ] + } + } diff --git a/deploy/kubernetes/deployment.yaml b/deploy/kubernetes/deployment.yaml new file mode 100644 index 0000000..df736ac --- /dev/null +++ b/deploy/kubernetes/deployment.yaml @@ -0,0 +1,93 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: xqueue-watcher + namespace: xqueue-watcher + labels: + app.kubernetes.io/name: xqueue-watcher + app.kubernetes.io/component: watcher +spec: + # Scale horizontally by increasing replicas. Each replica polls xqueue + # independently — no coordination is required between replicas. 
+ replicas: 2 + selector: + matchLabels: + app.kubernetes.io/name: xqueue-watcher + app.kubernetes.io/component: watcher + template: + metadata: + labels: + app.kubernetes.io/name: xqueue-watcher + app.kubernetes.io/component: watcher + spec: + serviceAccountName: xqueue-watcher + + # Spread replicas across nodes for availability + topologySpreadConstraints: + - maxSkew: 1 + topologyKey: kubernetes.io/hostname + whenUnsatisfiable: ScheduleAnyway + labelSelector: + matchLabels: + app.kubernetes.io/name: xqueue-watcher + + containers: + - name: xqueue-watcher + image: registry.example.com/xqueue-watcher:latest + imagePullPolicy: Always + command: ["xqueue-watcher", "-d", "/etc/xqueue-watcher"] + + resources: + requests: + cpu: "100m" + memory: "128Mi" + limits: + cpu: "500m" + memory: "512Mi" + + volumeMounts: + # Main watcher config (xqwatcher.json + logging.json) + - name: config + mountPath: /etc/xqueue-watcher + readOnly: true + # Queue-specific conf.d configs (one JSON file per course queue) + - name: queue-configs + mountPath: /etc/xqueue-watcher/conf.d + readOnly: true + + securityContext: + allowPrivilegeEscalation: false + readOnlyRootFilesystem: true + runAsNonRoot: true + runAsUser: 1000 + capabilities: + drop: ["ALL"] + + # Basic liveness: the process exits on fatal errors, so k8s will restart it. + # A more sophisticated probe could hit a /healthz endpoint if one is added. + livenessProbe: + exec: + command: ["python", "-c", "import xqueue_watcher"] + initialDelaySeconds: 10 + periodSeconds: 30 + failureThreshold: 3 + + volumes: + - name: config + configMap: + name: xqueue-watcher-config + items: + - key: xqwatcher.json + path: xqwatcher.json + - key: logging.json + path: logging.json + - name: queue-configs + # Queue-specific configs live in a separate ConfigMap so course teams + # can update them independently of the main watcher config. + # Replace with a Secret if the configs contain credentials. 
+ # In practice, AUTH credentials should come from a Secret and be + # mounted separately or passed as environment variables. + configMap: + name: xqueue-watcher-queue-configs + + restartPolicy: Always diff --git a/deploy/kubernetes/kustomization.yaml b/deploy/kubernetes/kustomization.yaml new file mode 100644 index 0000000..23b7ddc --- /dev/null +++ b/deploy/kubernetes/kustomization.yaml @@ -0,0 +1,25 @@ +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization + +namespace: xqueue-watcher + +resources: + - serviceaccount.yaml + - rbac.yaml + - configmap.yaml + - deployment.yaml + - networkpolicy.yaml + +# Override the image tag for a specific environment by adding a patch in an +# overlay directory. Example overlay (deploy/kubernetes/overlays/production/): +# +# kustomization.yaml: +# resources: +# - ../../ +# images: +# - name: registry.example.com/xqueue-watcher +# newTag: "v1.2.3" +# +images: + - name: registry.example.com/xqueue-watcher + newTag: latest diff --git a/deploy/kubernetes/networkpolicy.yaml b/deploy/kubernetes/networkpolicy.yaml new file mode 100644 index 0000000..c591138 --- /dev/null +++ b/deploy/kubernetes/networkpolicy.yaml @@ -0,0 +1,19 @@ +# Deny all egress from grading Job pods. +# xqueue-watcher pods themselves still need egress to reach the xqueue server. +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: deny-grader-egress + namespace: xqueue-watcher + labels: + app.kubernetes.io/name: xqueue-watcher + app.kubernetes.io/component: network-policy +spec: + podSelector: + matchLabels: + app.kubernetes.io/component: xqueue-grader + policyTypes: + - Egress + # No egress rules = deny all outbound traffic from grader pods. + # Student code cannot make network calls, exfiltrate data, or reach external services. 
+ egress: [] diff --git a/deploy/kubernetes/rbac.yaml b/deploy/kubernetes/rbac.yaml new file mode 100644 index 0000000..5a18123 --- /dev/null +++ b/deploy/kubernetes/rbac.yaml @@ -0,0 +1,37 @@ +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + name: xqueue-watcher-grader + namespace: xqueue-watcher + labels: + app.kubernetes.io/name: xqueue-watcher + app.kubernetes.io/component: watcher +rules: + # Create and manage grading Jobs + - apiGroups: ["batch"] + resources: ["jobs"] + verbs: ["create", "get", "list", "watch", "delete"] + # Read pod logs to collect grading results + - apiGroups: [""] + resources: ["pods"] + verbs: ["get", "list", "watch"] + - apiGroups: [""] + resources: ["pods/log"] + verbs: ["get"] +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + name: xqueue-watcher-grader + namespace: xqueue-watcher + labels: + app.kubernetes.io/name: xqueue-watcher + app.kubernetes.io/component: watcher +subjects: + - kind: ServiceAccount + name: xqueue-watcher + namespace: xqueue-watcher +roleRef: + kind: Role + apiGroup: rbac.authorization.k8s.io + name: xqueue-watcher-grader diff --git a/deploy/kubernetes/secret.yaml.template b/deploy/kubernetes/secret.yaml.template new file mode 100644 index 0000000..0a6fe3c --- /dev/null +++ b/deploy/kubernetes/secret.yaml.template @@ -0,0 +1,15 @@ +# Secret template — do NOT commit real values. +# Copy this to secret.yaml.local (gitignored) and fill in real values, +# or provision via your secrets management tool (Vault, AWS Secrets Manager, etc.) 
+apiVersion: v1 +kind: Secret +metadata: + name: xqueue-watcher-secrets + namespace: xqueue-watcher + labels: + app.kubernetes.io/name: xqueue-watcher + app.kubernetes.io/component: watcher +type: Opaque +stringData: + # New Relic license key (only required if using the production newrelic extra) + new-relic-license-key: "REPLACE_ME" diff --git a/deploy/kubernetes/serviceaccount.yaml b/deploy/kubernetes/serviceaccount.yaml new file mode 100644 index 0000000..1abe93b --- /dev/null +++ b/deploy/kubernetes/serviceaccount.yaml @@ -0,0 +1,8 @@ +apiVersion: v1 +kind: ServiceAccount +metadata: + name: xqueue-watcher + namespace: xqueue-watcher + labels: + app.kubernetes.io/name: xqueue-watcher + app.kubernetes.io/component: watcher diff --git a/docker-compose.yml b/docker-compose.yml new file mode 100644 index 0000000..ad5fae5 --- /dev/null +++ b/docker-compose.yml @@ -0,0 +1,51 @@ +services: + # xqueue: the submission queue that xqueue-watcher polls. + # Uses the official Open edX xqueue image. + xqueue: + image: openedx/xqueue:latest + ports: + - "18040:18040" + environment: + DJANGO_SETTINGS_MODULE: xqueue.settings.devstack + XQUEUE_DJANGO_SECRET_KEY: dev-secret-key + healthcheck: + test: ["CMD", "curl", "-sf", "http://localhost:18040/xqueue/status/"] + interval: 10s + timeout: 5s + retries: 5 + + # xqueue-watcher: polls xqueue and routes submissions to the grader. + xqueue-watcher: + build: + context: . + dockerfile: Dockerfile + depends_on: + xqueue: + condition: service_healthy + volumes: + # Mount the local conf.d so you can edit queue configs without rebuilding. + - ./conf.d:/etc/xqueue-watcher/conf.d:rw + # Mount local grader scripts for rapid iteration. + - ./data:/graders:rw + # Give xqueue-watcher access to the Docker socket so it can spawn grader containers. + - /var/run/docker.sock:/var/run/docker.sock + environment: + # The Docker daemon interprets bind-mount sources relative to the *host* + # filesystem, not the watcher container. 
Set this to the absolute host-side + # path that corresponds to /graders inside this container (i.e. the absolute + # path of ./data on your machine). + XQWATCHER_DOCKER_HOST_GRADER_ROOT: /absolute/host/path/to/data + extra_hosts: + - "host.docker.internal:host-gateway" + command: xqueue-watcher -d /etc/xqueue-watcher + + # sample-grader: an example grader image for local testing. + # Course teams replace this with their own image. + sample-grader: + build: + context: . + dockerfile: grader_support/Dockerfile.base + image: grader-base:local + # This service is not started automatically — it exists so `docker compose build` + # builds the base image that course grader images extend. + profiles: ["build-only"] diff --git a/grader_support/Dockerfile.base b/grader_support/Dockerfile.base new file mode 100644 index 0000000..58e3690 --- /dev/null +++ b/grader_support/Dockerfile.base @@ -0,0 +1,19 @@ +FROM python:3.11-slim AS grader-base + +# Create a non-root user for running student code +RUN useradd -m -u 1000 --shell /bin/false grader + +WORKDIR /grader + +# Copy the grader_support framework into the image +COPY grader_support/ ./grader_support/ + +# /tmp is always writable (tmpfs) even with read_only_root_filesystem=true. +# Student submission files are written there by the entrypoint. +VOLUME ["/tmp"] + +USER grader + +# The entrypoint reads SUBMISSION_CODE from the environment, writes it to /tmp, +# then invokes grader_support.run and prints JSON results to stdout. +ENTRYPOINT ["python", "-m", "grader_support.entrypoint"] diff --git a/grader_support/README.md b/grader_support/README.md new file mode 100644 index 0000000..f15b4f2 --- /dev/null +++ b/grader_support/README.md @@ -0,0 +1,106 @@ +## Grader Base Image + +This Dockerfile builds the base image that all course-specific grader images extend. 
+ +### What it contains + +- Python 3.11 (slim) +- The `grader_support` package (test framework and runner used by all graders) at `/grader/grader_support/` +- A non-root `grader` user (UID 1000) +- An entrypoint that reads student submissions from the `SUBMISSION_CODE` environment variable + +### Building + +```bash +docker build -t grader-base:latest -f grader_support/Dockerfile.base . +``` + +Or via the Makefile: + +```bash +make docker-build +``` + +### Course team usage + +Course teams create their own image `FROM grader-base` and add their grader scripts plus any Python dependencies required by the graders. + +#### Directory layout inside the container + +``` +/grader/ ← WORKDIR (base image); grader_support lives here +└── grader_support/ ← test framework (gradelib, run, entrypoint) + +/graders/ ← course grader scripts (course team copies these) +├── ps01/ +│ └── Problem1/ +│ ├── grade_Problem1.py ← grader script (defines `grader = Grader()`) +│ └── answer.py ← reference solution +└── ml/ + └── cluster/ + └── grade_cluster.py +``` + +`grader_root` in the handler config should point to `/graders/` (or a subdirectory of it). The `SUBMISSION_CODE` env var carries student code; the entrypoint writes it to `/tmp` (a writable tmpfs even when the root filesystem is read-only). + +#### Example course Dockerfile + +```dockerfile +# syntax=docker/dockerfile:1 +ARG GRADER_BASE_IMAGE=ghcr.io/mitodl/xqueue-watcher-grader-base:latest +FROM ${GRADER_BASE_IMAGE} + +# pip must run as root; the base image ends with USER grader. +USER root +RUN pip install --no-cache-dir numpy==1.26.4 scipy==1.13.0 + +# Copy grader scripts to /graders/. Do NOT copy them to /grader/ — that +# would overwrite the grader_support package from the base image. 
+COPY --chown=grader:grader graders/ /graders/ + +USER grader +``` + +#### Example handler config (`conf.d/my-course.json`) + +```json +{ + "my-course-queue": { + "SERVER": "http://xqueue:18040", + "CONNECTIONS": 2, + "AUTH": ["lms", "lms"], + "HANDLERS": [ + { + "HANDLER": "xqueue_watcher.containergrader.ContainerGrader", + "KWARGS": { + "grader_root": "/graders/", + "image": "registry.example.com/my-course-grader:latest", + "backend": "kubernetes", + "cpu_limit": "500m", + "memory_limit": "256Mi", + "timeout": 20 + } + } + ] + } +} +``` + +The `grader` field inside each xqueue submission payload should be a path **relative to `grader_root`**, e.g. `"ps01/Problem1/grade_Problem1.py"`. + +### Security properties + +Grader containers run with: +- Non-root user (UID 1000) +- Read-only root filesystem (`/tmp` is a tmpfs for submission files) +- No network access (`network_disabled: true` / Kubernetes NetworkPolicy) +- CPU and memory limits enforced by the container runtime +- Hard wall-clock timeout via `activeDeadlineSeconds` (Kubernetes) or `timeout` (Docker) + +### Important: `gradelib` compatibility + +The `grader_support/__init__.py` injects the framework's Python 3 `gradelib` and +`graderutil` modules into `sys.modules` before any grader file is imported. This +means grader scripts that do `from gradelib import *` receive the framework version +automatically, even if a legacy `gradelib.py` exists elsewhere on disk. Course teams +do not need to ship their own copy of `gradelib.py`. diff --git a/grader_support/entrypoint.py b/grader_support/entrypoint.py new file mode 100644 index 0000000..29bb6b5 --- /dev/null +++ b/grader_support/entrypoint.py @@ -0,0 +1,236 @@ +""" +Entrypoint for running the complete grading pipeline inside a container. + +The grader scripts (grader file, answer.py) are baked into this image. 
+This module reads SUBMISSION_CODE from the environment, runs both the staff +answer and the student submission through the grader, compares results, and +prints the final grade as JSON to stdout. + +Usage (set by Dockerfile ENTRYPOINT): + python -m grader_support.entrypoint GRADER_FILE SEED +""" + +import importlib.util +import json +import os +import sys +import traceback + +_DEBUG = os.environ.get("GRADER_DEBUG", "").lower() in ("1", "true", "yes") + + +def _dbg(*args): + """Print debug info to stderr when GRADER_DEBUG=1. + + Kubernetes reads pod logs via read_namespaced_pod_log which captures both + stdout and stderr. Keep this off by default so the JSON on stdout is the + only output in the pod log that containergrader.py needs to parse. + """ + if _DEBUG: + print("[DEBUG entrypoint]", *args, file=sys.stderr, flush=True) + + +def main(): + if len(sys.argv) != 3: + print( + "Usage: python -m grader_support.entrypoint GRADER_FILE SEED", + file=sys.stderr, + ) + sys.exit(1) + + grader_path = sys.argv[1] + seed = int(sys.argv[2]) + submission_code = os.environ.get("SUBMISSION_CODE", "") + + _dbg(f"grader_path={grader_path!r} seed={seed}") + _dbg(f"submission_code ({len(submission_code)} chars): {submission_code[:120]!r}") + + results = {"errors": [], "tests": [], "correct": False, "score": 0} + + # Install gettext into builtins BEFORE loading the grader module. + # Grader scripts may call _() at module level (e.g. in input_validators), + # so _ must be available before exec_module runs. 
+ import gettext + lang = os.environ.get("GRADER_LANGUAGE", "en") + grader_dir = os.path.dirname(os.path.abspath(grader_path)) + locale_dir = os.path.join(grader_dir, "conf", "locale") + _dbg(f"grader_dir={grader_dir!r} locale_dir={locale_dir!r}") + trans = gettext.translation( + "graders", localedir=locale_dir, fallback=True, languages=[lang] + ) + trans.install(names=None) + _dbg("gettext installed") + + # Load the grader module to access test definitions, preprocessors, and + # input validators. The grader script is baked into this image. + _dbg(f"loading grader module from {grader_path!r}") + try: + spec = importlib.util.spec_from_file_location("grader_module", grader_path) + grader_module_obj = importlib.util.module_from_spec(spec) + spec.loader.exec_module(grader_module_obj) + grader = grader_module_obj.grader + _dbg(f"grader module loaded OK, tests={len(list(grader.tests()))}") + except Exception: + _dbg("EXCEPTION loading grader module:") + traceback.print_exc(file=sys.stderr) + raise + + # Validate submission format before doing any work. + _dbg("checking input_errors") + try: + errors = grader.input_errors(submission_code) + except Exception: + _dbg("EXCEPTION in input_errors:") + traceback.print_exc(file=sys.stderr) + raise + if errors: + _dbg(f"input_errors returned: {errors}") + results["errors"].extend(errors) + print(json.dumps(results)) + return + _dbg("input_errors: none") + + # Preprocess both the staff answer and the student submission. + answer_path = os.path.join(grader_dir, "answer.py") + _dbg(f"reading answer from {answer_path!r}") + with open(answer_path, "rb") as f: + answer = f.read().decode("utf-8") + _dbg(f"answer ({len(answer)} chars): {answer[:200]!r}") + + # Normalize tabs to spaces before preprocessing. Many course grader files + # were authored for Python 2 which tolerated mixed tab/space indentation; + # Python 3's exec raises TabError on such code. 
+ answer = answer.expandtabs(4) + submission_code = submission_code.expandtabs(4) + + processed_answer = "# coding: utf8\n" + grader.preprocess(answer) + processed_submission = "# coding: utf8\n" + grader.preprocess(submission_code) + _dbg(f"processed_answer ({len(processed_answer)} chars): {processed_answer[:300]!r}") + _dbg(f"processed_submission ({len(processed_submission)} chars): {processed_submission[:300]!r}") + + # Write to /tmp, which is backed by an emptyDir volume mount in Kubernetes + # (readOnlyRootFilesystem=True prevents writes to the root FS). + with open("/tmp/answer.py", "w", encoding="utf-8") as f: + f.write(processed_answer) + with open("/tmp/submission.py", "w", encoding="utf-8") as f: + f.write(processed_submission) + _dbg("wrote /tmp/answer.py and /tmp/submission.py") + + # Make /tmp and the grader directory importable so run.py can find them. + # /tmp must come BEFORE grader_dir: the preprocessed answer.py and + # submission.py in /tmp must shadow the original source files in grader_dir. + sys.path.insert(0, grader_dir) + sys.path.insert(0, "/tmp") + _dbg(f"sys.path[:4]={sys.path[:4]}") + + from . import run as run_module + from .gradelib import EndTest + + grader_name = os.path.splitext(os.path.basename(grader_path))[0] + _dbg(f"grader_name={grader_name!r}") + + # Run the staff answer first to get expected outputs. 
+ _dbg("running staff answer") + expected_output = run_module.run(grader_name, "answer", seed) + _dbg(f"expected_output grader status={expected_output['grader']['status']!r}" + f" submission status={expected_output['submission']['status']!r}" + f" exceptions={expected_output['exceptions']}" + f" results_count={len(expected_output['results'])}") + if expected_output["grader"].get("exception"): + _dbg(f"grader exception:\n{expected_output['grader']['exception']}") + if expected_output["submission"].get("exception"): + _dbg(f"answer exception:\n{expected_output['submission']['exception']}") + + expected_ok = ( + not expected_output["exceptions"] + and expected_output["grader"]["status"] == "ok" + and expected_output["submission"]["status"] == "ok" + ) + if not expected_ok: + _dbg("expected_ok=False → returning staff-solution error") + results["errors"].append( + "There was a problem running the staff solution (Staff debug)." + ) + print(json.dumps(results)) + return + + # Run the student submission. + _dbg("running student submission") + actual_output = run_module.run(grader_name, "submission", seed) + _dbg(f"actual_output grader status={actual_output['grader']['status']!r}" + f" submission status={actual_output['submission']['status']!r}" + f" exceptions={actual_output['exceptions']}" + f" results_count={len(actual_output['results'])}") + if actual_output["submission"].get("exception"): + _dbg(f"submission exception:\n{actual_output['submission']['exception']}") + + actual_ok = actual_output["grader"]["status"] == "ok" + + if actual_output["submission"]["status"] != "ok": + shown_error = actual_output["submission"].get("exception") or ( + "There was an error thrown while running your solution." + ) + results["errors"].append(shown_error) + actual_ok = False + + if not actual_ok: + results["errors"].append("We couldn't run your solution (Staff debug).") + print(json.dumps(results)) + return + + # Compare test results. 
+ expected_results = expected_output["results"] + actual_results = actual_output["results"] + + if len(expected_results) != len(actual_results): + results["errors"].append( + "Something went wrong: different numbers of tests ran for " + "your code and for our reference code." + ) + print(json.dumps(results)) + return + + hide_output = os.environ.get("HIDE_OUTPUT", "").lower() in ("1", "true", "yes") + TOO_LONG = 5000 + corrects = [] + + for test, exp, act in zip(grader.tests(), expected_results, actual_results): + exp_short, exp_long, exp_out = exp + act_short, act_long, act_out = act + + if exp_short != act_short: + results["errors"].append("Something went wrong: tests don't match up.") + print(json.dumps(results)) + return + + if len(act_out) > TOO_LONG: + act_out = act_out[:TOO_LONG] + "...OUTPUT TRUNCATED" + + try: + correct = test.compare_results(exp_out, act_out) + except EndTest as e: + if str(e).strip(): + act_out += f"\n*** ERROR: {e} ***" + correct = False + + corrects.append(correct) + if not hide_output: + results["tests"].append( + (exp_short, exp_long, correct, exp_out, act_out) + ) + + n = len(corrects) + results["correct"] = all(corrects) and n > 0 + results["score"] = float(sum(corrects)) / n if n > 0 else 0 + + if n == 0 and not results["errors"]: + results["errors"] = [ + "There was a problem while running your code (Staff debug). " + "Please contact the course staff for assistance." 
+ ] + + print(json.dumps(results)) + + +if __name__ == "__main__": + main() diff --git a/load_test/run.py b/load_test/run.py index dc10d98..1eb6027 100644 --- a/load_test/run.py +++ b/load_test/run.py @@ -7,7 +7,7 @@ import json import tempfile import getpass -from path import Path +from pathlib import Path import pprint import argparse @@ -20,7 +20,7 @@ { "HANDLER": "xqueue_watcher.jailedgrader.JailedGrader", "KWARGS": { - "grader_root": Path(__file__).dirname() / "../../data/6.00x/graders/", + "grader_root": Path(__file__).parent / "../../data/6.00x/graders/", } } ] diff --git a/pyproject.toml b/pyproject.toml new file mode 100644 index 0000000..5b19727 --- /dev/null +++ b/pyproject.toml @@ -0,0 +1,42 @@ +[build-system] +requires = ["hatchling"] +build-backend = "hatchling.build" + +[project] +name = "xqueue-watcher" +version = "0.3" +description = "XQueue Pull Grader" +readme = "README.md" +requires-python = ">=3.11" +license = { file = "LICENSE.TXT" } +dependencies = [ + "docker>=7.0.0", + "kubernetes>=29.0.0", + "opentelemetry-api", + "opentelemetry-exporter-otlp-proto-http", + "opentelemetry-sdk", + "requests", +] + +[project.optional-dependencies] +codejail = [ + "edx-codejail", +] + +[project.scripts] +xqueue-watcher = "xqueue_watcher.manager:main" + +[dependency-groups] +dev = [ + "coverage", + "edx-codejail", + "mock", + "pytest-cov", +] + +[tool.uv] +package = true +default-groups = ["dev"] + +[tool.hatch.build.targets.wheel] +packages = ["xqueue_watcher", "grader_support"] diff --git a/requirements/base.in b/requirements/base.in deleted file mode 100644 index b8c4b5d..0000000 --- a/requirements/base.in +++ /dev/null @@ -1,10 +0,0 @@ -# Core requirements for using this package - --c constraints.txt - -dogstatsd-python -path.py -requests -six - --e git+https://github.com/openedx/codejail.git@4127fc4bd5775cc72aee8d7f0a70e31405e22439#egg=codejail diff --git a/requirements/base.txt b/requirements/base.txt deleted file mode 100644 index efc6c39..0000000 --- 
a/requirements/base.txt +++ /dev/null @@ -1,26 +0,0 @@ -# -# This file is autogenerated by pip-compile with Python 3.12 -# by the following command: -# -# make upgrade -# --e git+https://github.com/openedx/codejail.git@4127fc4bd5775cc72aee8d7f0a70e31405e22439#egg=codejail - # via -r requirements/base.in -certifi==2026.2.25 - # via requests -charset-normalizer==3.4.5 - # via requests -dogstatsd-python==0.5.6 - # via -r requirements/base.in -idna==3.11 - # via requests -path==17.1.1 - # via path-py -path-py==12.5.0 - # via -r requirements/base.in -requests==2.32.5 - # via -r requirements/base.in -six==1.17.0 - # via -r requirements/base.in -urllib3==2.6.3 - # via requests diff --git a/requirements/ci.in b/requirements/ci.in deleted file mode 100644 index 44cd1b5..0000000 --- a/requirements/ci.in +++ /dev/null @@ -1,6 +0,0 @@ -# Requirements for running tests in CI --c constraints.txt - --r test.txt - -coverage diff --git a/requirements/ci.txt b/requirements/ci.txt deleted file mode 100644 index 65193c0..0000000 --- a/requirements/ci.txt +++ /dev/null @@ -1,66 +0,0 @@ -# -# This file is autogenerated by pip-compile with Python 3.12 -# by the following command: -# -# make upgrade -# --e git+https://github.com/openedx/codejail.git@4127fc4bd5775cc72aee8d7f0a70e31405e22439#egg=codejail - # via -r requirements/test.txt -certifi==2026.2.25 - # via - # -r requirements/test.txt - # requests -charset-normalizer==3.4.5 - # via - # -r requirements/test.txt - # requests -coverage[toml]==7.13.4 - # via - # -r requirements/ci.in - # -r requirements/test.txt - # pytest-cov -dogstatsd-python==0.5.6 - # via -r requirements/test.txt -idna==3.11 - # via - # -r requirements/test.txt - # requests -iniconfig==2.3.0 - # via - # -r requirements/test.txt - # pytest -mock==5.2.0 - # via -r requirements/test.txt -packaging==26.0 - # via - # -r requirements/test.txt - # pytest -path==17.1.1 - # via - # -r requirements/test.txt - # path-py -path-py==12.5.0 - # via -r requirements/test.txt 
-pluggy==1.6.0 - # via - # -r requirements/test.txt - # pytest - # pytest-cov -pygments==2.19.2 - # via - # -r requirements/test.txt - # pytest -pytest==9.0.2 - # via - # -r requirements/test.txt - # pytest-cov -pytest-cov==7.0.0 - # via -r requirements/test.txt -requests==2.32.5 - # via -r requirements/test.txt -six==1.17.0 - # via -r requirements/test.txt -urllib3==2.6.3 - # via - # -r requirements/test.txt - # requests diff --git a/requirements/common_constraints.txt b/requirements/common_constraints.txt deleted file mode 100644 index 72cc4cc..0000000 --- a/requirements/common_constraints.txt +++ /dev/null @@ -1,20 +0,0 @@ -# A central location for most common version constraints -# (across edx repos) for pip-installation. -# -# Similar to other constraint files this file doesn't install any packages. -# It specifies version constraints that will be applied if a package is needed. -# When pinning something here, please provide an explanation of why it is a good -# idea to pin this package across all edx repos, Ideally, link to other information -# that will help people in the future to remove the pin when possible. -# Writing an issue against the offending project and linking to it here is good. -# -# Note: Changes to this file will automatically be used by other repos, referencing -# this file from Github directly. It does not require packaging in edx-lint. - -# using LTS django version -Django<6.0 - -# elasticsearch>=7.14.0 includes breaking changes in it which caused issues in discovery upgrade process. -# elastic search changelog: https://www.elastic.co/guide/en/enterprise-search/master/release-notes-7.14.0.html -# See https://github.com/openedx/edx-platform/issues/35126 for more info -elasticsearch<7.14.0 diff --git a/requirements/constraints.txt b/requirements/constraints.txt deleted file mode 100644 index 9d005a4..0000000 --- a/requirements/constraints.txt +++ /dev/null @@ -1,11 +0,0 @@ -# Version constraints for pip-installation. 
-# -# This file doesn't install any packages. It specifies version constraints -# that will be applied if a package is needed. -# -# When pinning something here, please provide an explanation of why. Ideally, -# link to other information that will help people in the future to remove the -# pin when possible. Writing an issue against the offending project and -# linking to it here is good. - --c common_constraints.txt \ No newline at end of file diff --git a/requirements/pip.in b/requirements/pip.in deleted file mode 100644 index 715478c..0000000 --- a/requirements/pip.in +++ /dev/null @@ -1,7 +0,0 @@ --c constraints.txt -# Core dependencies for installing other packages - -pip -setuptools -wheel - diff --git a/requirements/pip.txt b/requirements/pip.txt deleted file mode 100644 index 084d708..0000000 --- a/requirements/pip.txt +++ /dev/null @@ -1,16 +0,0 @@ -# -# This file is autogenerated by pip-compile with Python 3.12 -# by the following command: -# -# make upgrade -# -packaging==26.0 - # via wheel -wheel==0.46.3 - # via -r requirements/pip.in - -# The following packages are considered to be unsafe in a requirements file: -pip==26.0.1 - # via -r requirements/pip.in -setuptools==82.0.0 - # via -r requirements/pip.in diff --git a/requirements/pip_tools.in b/requirements/pip_tools.in deleted file mode 100644 index caf45a9..0000000 --- a/requirements/pip_tools.in +++ /dev/null @@ -1,4 +0,0 @@ - # Dependencies to run compile tools --c constraints.txt - -pip-tools # Contains pip-compile, used to generate pip requirements files diff --git a/requirements/pip_tools.txt b/requirements/pip_tools.txt deleted file mode 100644 index 107789a..0000000 --- a/requirements/pip_tools.txt +++ /dev/null @@ -1,26 +0,0 @@ -# -# This file is autogenerated by pip-compile with Python 3.12 -# by the following command: -# -# make upgrade -# -build==1.4.0 - # via pip-tools -click==8.3.1 - # via pip-tools -packaging==26.0 - # via - # build - # wheel -pip-tools==7.5.3 - # via -r 
requirements/pip_tools.in -pyproject-hooks==1.2.0 - # via - # build - # pip-tools -wheel==0.46.3 - # via pip-tools - -# The following packages are considered to be unsafe in a requirements file: -# pip -# setuptools diff --git a/requirements/production.in b/requirements/production.in deleted file mode 100644 index b739cac..0000000 --- a/requirements/production.in +++ /dev/null @@ -1,5 +0,0 @@ -# Production requirements for using this package - --c constraints.txt - --r base.txt diff --git a/requirements/production.txt b/requirements/production.txt deleted file mode 100644 index d60125e..0000000 --- a/requirements/production.txt +++ /dev/null @@ -1,36 +0,0 @@ -# -# This file is autogenerated by pip-compile with Python 3.12 -# by the following command: -# -# make upgrade -# --e git+https://github.com/openedx/codejail.git@4127fc4bd5775cc72aee8d7f0a70e31405e22439#egg=codejail - # via -r requirements/base.txt -certifi==2026.2.25 - # via - # -r requirements/base.txt - # requests -charset-normalizer==3.4.5 - # via - # -r requirements/base.txt - # requests -dogstatsd-python==0.5.6 - # via -r requirements/base.txt -idna==3.11 - # via - # -r requirements/base.txt - # requests -path==17.1.1 - # via - # -r requirements/base.txt - # path-py -path-py==12.5.0 - # via -r requirements/base.txt -requests==2.32.5 - # via -r requirements/base.txt -six==1.17.0 - # via -r requirements/base.txt -urllib3==2.6.3 - # via - # -r requirements/base.txt - # requests diff --git a/requirements/test.in b/requirements/test.in deleted file mode 100644 index a536201..0000000 --- a/requirements/test.in +++ /dev/null @@ -1,8 +0,0 @@ -# Requirements for test runs - --c constraints.txt - --r production.txt - -mock -pytest-cov diff --git a/requirements/test.txt b/requirements/test.txt deleted file mode 100644 index 6ef7fc0..0000000 --- a/requirements/test.txt +++ /dev/null @@ -1,54 +0,0 @@ -# -# This file is autogenerated by pip-compile with Python 3.12 -# by the following command: -# -# make upgrade -# 
--e git+https://github.com/openedx/codejail.git@4127fc4bd5775cc72aee8d7f0a70e31405e22439#egg=codejail - # via -r requirements/production.txt -certifi==2026.2.25 - # via - # -r requirements/production.txt - # requests -charset-normalizer==3.4.5 - # via - # -r requirements/production.txt - # requests -coverage[toml]==7.13.4 - # via pytest-cov -dogstatsd-python==0.5.6 - # via -r requirements/production.txt -idna==3.11 - # via - # -r requirements/production.txt - # requests -iniconfig==2.3.0 - # via pytest -mock==5.2.0 - # via -r requirements/test.in -packaging==26.0 - # via pytest -path==17.1.1 - # via - # -r requirements/production.txt - # path-py -path-py==12.5.0 - # via -r requirements/production.txt -pluggy==1.6.0 - # via - # pytest - # pytest-cov -pygments==2.19.2 - # via pytest -pytest==9.0.2 - # via pytest-cov -pytest-cov==7.0.0 - # via -r requirements/test.in -requests==2.32.5 - # via -r requirements/production.txt -six==1.17.0 - # via -r requirements/production.txt -urllib3==2.6.3 - # via - # -r requirements/production.txt - # requests diff --git a/setup.py b/setup.py deleted file mode 100644 index d8ce0ec..0000000 --- a/setup.py +++ /dev/null @@ -1,14 +0,0 @@ -from setuptools import setup - - -setup( - name='xqueue_watcher', - version='1.0.0', - description='XQueue Pull Grader', - packages=[ - 'grader_support', - 'xqueue_watcher', - ], - install_requires=open('requirements/production.txt', - 'rt', encoding='utf-8').readlines(), -) diff --git a/tests/fixtures/config/xqueue_servers.json b/tests/fixtures/config/xqueue_servers.json new file mode 100644 index 0000000..dafc4a5 --- /dev/null +++ b/tests/fixtures/config/xqueue_servers.json @@ -0,0 +1,6 @@ +{ + "fixture-server": { + "SERVER": "http://fixture-xqueue:18040", + "AUTH": ["fixture-user", "fixture-pass"] + } +} diff --git a/tests/test_container_grader.py b/tests/test_container_grader.py new file mode 100644 index 0000000..87a26df --- /dev/null +++ b/tests/test_container_grader.py @@ -0,0 +1,347 @@ +""" 
+Unit tests for ContainerGrader. + +Uses mock objects for the Docker SDK and kubernetes client to test container +execution paths without requiring a live Docker daemon or cluster. +""" + +import json +from pathlib import Path +from unittest import mock +from unittest.mock import patch + +import pytest + +from xqueue_watcher.containergrader import ContainerGrader, _parse_cpu_millis, _parse_memory_bytes + + +# --------------------------------------------------------------------------- +# Helpers +# --------------------------------------------------------------------------- + +def make_grader(backend="docker", **kwargs): + defaults = dict(grader_root="/graders", image="course-grader:v1", backend=backend) + defaults.update(kwargs) + return ContainerGrader(**defaults) + + +# --------------------------------------------------------------------------- +# _parse_cpu_millis +# --------------------------------------------------------------------------- + +class TestParseCpuMillis: + def test_millicores(self): + assert _parse_cpu_millis("500m") == 500.0 + + def test_whole_cores(self): + assert _parse_cpu_millis("2") == 2000.0 + + def test_fractional_cores(self): + assert _parse_cpu_millis("0.5") == 500.0 + + +# --------------------------------------------------------------------------- +# ContainerGrader.__init__ +# --------------------------------------------------------------------------- + +class TestContainerGraderInit: + def test_valid_kubernetes_backend(self): + g = make_grader(backend="kubernetes") + assert g.backend == "kubernetes" + + def test_valid_docker_backend(self): + g = make_grader(backend="docker") + assert g.backend == "docker" + + def test_invalid_backend(self): + with pytest.raises(ValueError, match="Unsupported backend"): + make_grader(backend="podman") + + def test_defaults_from_env_when_no_kwargs(self): + """With no kwargs, values come from env defaults (all at baseline).""" + g = ContainerGrader(grader_root="/graders", image="img:latest") + assert 
g.backend == "kubernetes" + assert g.namespace == "default" + assert g.cpu_limit == "500m" + assert g.memory_limit == "256Mi" + assert g.timeout == 20 + + def test_kwargs_override_env_defaults(self): + """Explicit kwargs always win over env defaults.""" + env = { + "XQWATCHER_GRADER_BACKEND": "docker", + "XQWATCHER_GRADER_NAMESPACE": "env-ns", + "XQWATCHER_GRADER_CPU_LIMIT": "250m", + "XQWATCHER_GRADER_MEMORY_LIMIT": "128Mi", + "XQWATCHER_GRADER_TIMEOUT": "5", + } + with patch.dict("os.environ", env): + g = ContainerGrader( + grader_root="/graders", + image="img:latest", + backend="kubernetes", + namespace="kwarg-ns", + cpu_limit="1000m", + memory_limit="512Mi", + timeout=99, + ) + assert g.backend == "kubernetes" + assert g.namespace == "kwarg-ns" + assert g.cpu_limit == "1000m" + assert g.memory_limit == "512Mi" + assert g.timeout == 99 + + def test_env_defaults_applied_when_no_kwargs(self): + """Env vars are used when the corresponding kwarg is absent.""" + env = { + "XQWATCHER_GRADER_BACKEND": "docker", + "XQWATCHER_GRADER_NAMESPACE": "grading", + "XQWATCHER_GRADER_CPU_LIMIT": "750m", + "XQWATCHER_GRADER_MEMORY_LIMIT": "512Mi", + "XQWATCHER_GRADER_TIMEOUT": "30", + } + with patch.dict("os.environ", env): + g = ContainerGrader(grader_root="/graders", image="img:latest") + assert g.backend == "docker" + assert g.namespace == "grading" + assert g.cpu_limit == "750m" + assert g.memory_limit == "512Mi" + assert g.timeout == 30 + + def test_invalid_backend_from_env_raises(self): + with patch.dict("os.environ", {"XQWATCHER_GRADER_BACKEND": "podman"}): + with pytest.raises(ValueError, match="Unsupported backend"): + ContainerGrader(grader_root="/graders", image="img:latest") + + +# --------------------------------------------------------------------------- +# _build_k8s_job +# --------------------------------------------------------------------------- + +class TestBuildK8sJob: + def setup_method(self): + self.grader = make_grader( + backend="kubernetes", + 
namespace="test-ns", + cpu_limit="1000m", + memory_limit="512Mi", + timeout=30, + ) + + def _build(self, job_name="test-job", grader_path="/graders/grade.py", code="code", seed=42): + return self.grader._build_k8s_job(job_name, grader_path, code, seed) + + def test_job_name(self): + job = self._build(job_name="xqueue-grader-abc123") + assert job.metadata.name == "xqueue-grader-abc123" + + def test_image(self): + job = self._build() + assert job.spec.template.spec.containers[0].image == "course-grader:v1" + + def test_args_are_grader_and_seed(self): + # entrypoint takes GRADER_FILE SEED (no submission.py positional arg) + job = self._build(grader_path="/graders/ps07/grade.py", seed=99) + assert job.spec.template.spec.containers[0].args == ["/graders/ps07/grade.py", "99"] + + def test_submission_code_env(self): + job = self._build(code="x = 1") + env = {e.name: e.value for e in job.spec.template.spec.containers[0].env} + assert env["SUBMISSION_CODE"] == "x = 1" + + def test_grader_language_env_default(self): + job = self._build() + env = {e.name: e.value for e in job.spec.template.spec.containers[0].env} + assert env["GRADER_LANGUAGE"] == "en" + + def test_grader_language_from_config(self): + job = self.grader._build_k8s_job("job", "/g/grade.py", "code", 1, {"lang": "fr"}) + env = {e.name: e.value for e in job.spec.template.spec.containers[0].env} + assert env["GRADER_LANGUAGE"] == "fr" + + def test_hide_output_env_default_off(self): + job = self._build() + env = {e.name: e.value for e in job.spec.template.spec.containers[0].env} + assert env["HIDE_OUTPUT"] == "0" + + def test_hide_output_env_when_set(self): + job = self.grader._build_k8s_job("job", "/g/grade.py", "code", 1, {"hide_output": True}) + env = {e.name: e.value for e in job.spec.template.spec.containers[0].env} + assert env["HIDE_OUTPUT"] == "1" + + def test_resource_limits(self): + job = self._build() + limits = job.spec.template.spec.containers[0].resources.limits + assert limits["cpu"] == "1000m" + 
assert limits["memory"] == "512Mi" + + def test_tmp_empty_dir_volume_present(self): + job = self._build() + volumes = job.spec.template.spec.volumes + tmp_vol = next((v for v in volumes if v.name == "tmp"), None) + assert tmp_vol is not None, "emptyDir volume at /tmp is required" + assert tmp_vol.empty_dir is not None + + def test_tmp_volume_mounted_at_tmp(self): + job = self._build() + mounts = job.spec.template.spec.containers[0].volume_mounts + tmp_mount = next((m for m in mounts if m.name == "tmp"), None) + assert tmp_mount is not None + assert tmp_mount.mount_path == "/tmp" + + def test_read_only_root_filesystem(self): + job = self._build() + sc = job.spec.template.spec.containers[0].security_context + assert sc.read_only_root_filesystem is True + + def test_no_privilege_escalation(self): + job = self._build() + sc = job.spec.template.spec.containers[0].security_context + assert sc.allow_privilege_escalation is False + + def test_backoff_limit_zero(self): + job = self._build() + assert job.spec.backoff_limit == 0 + + def test_active_deadline_matches_timeout(self): + job = self._build() + assert job.spec.active_deadline_seconds == 30 + + +# --------------------------------------------------------------------------- +# _run_docker +# --------------------------------------------------------------------------- + +def _make_mock_client(exit_code=0, stdout_data=b'{"correct": true}', stderr_data=b""): + """Return a (client, container) pair pre-configured with given outputs.""" + container = mock.MagicMock() + container.wait.return_value = {"StatusCode": exit_code} + + def logs_side_effect(stdout=True, stderr=False): + if stderr and not stdout: + return stderr_data + return stdout_data + + container.logs.side_effect = logs_side_effect + client = mock.MagicMock() + client.containers.run.return_value = container + return client, container + + +class TestRunDocker: + def setup_method(self): + self.grader = make_grader(backend="docker", timeout=10) + + def _run(self, 
client, code="print('hi')", seed=42, grader_config=None): + with mock.patch("docker.from_env", return_value=client): + return self.grader._run_docker( + "/graders/ps07/grade.py", code, seed, grader_config or {} + ) + + def test_success_returns_stdout(self): + client, _ = _make_mock_client(stdout_data=b'{"correct": true}') + result = self._run(client) + assert result == b'{"correct": true}' + + def test_container_removed_on_success(self): + client, container = _make_mock_client() + self._run(client) + container.remove.assert_called_once_with(force=True) + + def test_container_removed_on_failure(self): + client, container = _make_mock_client(exit_code=1, stderr_data=b"Traceback...") + with pytest.raises(RuntimeError): + self._run(client) + container.remove.assert_called_once_with(force=True) + + def test_non_zero_exit_raises(self): + client, _ = _make_mock_client(exit_code=1, stderr_data=b"Error!") + with pytest.raises(RuntimeError, match="non-zero status"): + self._run(client) + + def test_stderr_included_in_error_message(self): + client, _ = _make_mock_client(exit_code=1, stderr_data=b"ImportError: missing module") + with pytest.raises(RuntimeError, match="ImportError"): + self._run(client) + + def test_timeout_raises_runtime_error(self): + client, container = _make_mock_client() + container.wait.side_effect = Exception("ReadTimeout") + with pytest.raises(RuntimeError, match="timed out"): + self._run(client) + + def test_missing_docker_sdk_raises(self): + with mock.patch.dict("sys.modules", {"docker": None}): + with pytest.raises(RuntimeError, match="'docker' package"): + self.grader._run_docker("/graders/grade.py", "code", 1, {}) + + def test_string_result_converted_to_bytes(self): + client, container = _make_mock_client() + container.logs.side_effect = None + container.logs.return_value = '{"correct": false}' + result = self._run(client) + assert isinstance(result, bytes) + + def test_entrypoint_args_are_grader_and_seed(self): + """Container command should be 
[grader_path, seed] — not 3 args.""" + client, _ = _make_mock_client() + self._run(client, seed=99) + call_kwargs = client.containers.run.call_args + command = call_kwargs.kwargs.get("command") or call_kwargs[1].get("command") + assert len(command) == 2 + assert command[1] == "99" + + def test_grader_language_passed_as_env(self): + client, _ = _make_mock_client() + self._run(client, grader_config={"lang": "es"}) + call_kwargs = client.containers.run.call_args + env = call_kwargs.kwargs.get("environment") or call_kwargs[1].get("environment") + assert env.get("GRADER_LANGUAGE") == "es" + + +# --------------------------------------------------------------------------- +# grade() public interface +# --------------------------------------------------------------------------- + +class TestGrade: + def setup_method(self): + self.grader = make_grader() + + def _grade(self, submission="x = 1", grader_config=None): + if grader_config is None: + grader_config = {} + return self.grader.grade( + grader_path=Path("/graders/ps07/grade.py"), + grader_config=grader_config, + submission=submission, + ) + + def test_skip_grader_returns_correct(self): + result = self._grade(grader_config={"skip_grader": True}) + assert result["correct"] is True + assert result["score"] == 1 + + def test_container_result_returned_directly(self): + grade_json = {"correct": True, "score": 1.0, "errors": [], "tests": []} + with mock.patch.object(self.grader, "_run", return_value=json.dumps(grade_json).encode()): + result = self._grade() + assert result["correct"] is True + assert result["score"] == 1.0 + + def test_container_failure_returns_error_dict(self): + with mock.patch.object(self.grader, "_run", side_effect=RuntimeError("container died")): + result = self._grade() + assert result["correct"] is False + assert result["errors"] + + def test_large_submission_logs_warning(self, caplog): + import logging + large_code = "x = 1\n" * 10_000 # ~70 KB + grade_json = {"correct": False, "score": 0.0, "errors": 
[], "tests": []} + # Mock the backend method so _run() still executes the size check. + with mock.patch.object( + self.grader, "_run_docker", return_value=json.dumps(grade_json).encode() + ): + with caplog.at_level(logging.WARNING): + self._grade(submission=large_code) + assert any("large" in r.message.lower() for r in caplog.records) diff --git a/tests/test_env_settings.py b/tests/test_env_settings.py new file mode 100644 index 0000000..7def87f --- /dev/null +++ b/tests/test_env_settings.py @@ -0,0 +1,219 @@ +import logging +import json +import tempfile +import unittest +from pathlib import Path +from unittest.mock import patch + +from xqueue_watcher.env_settings import configure_logging, get_container_grader_defaults, get_manager_config_from_env +from xqueue_watcher.settings import MANAGER_CONFIG_DEFAULTS, get_xqueue_servers + + +class TestConfigureLogging(unittest.TestCase): + def tearDown(self): + # Reset root logger after each test so handlers don't accumulate. + root = logging.getLogger() + root.handlers.clear() + root.setLevel(logging.WARNING) + + def test_default_level_is_info(self): + with patch.dict("os.environ", {}, clear=True): + configure_logging() + self.assertEqual(logging.getLogger().level, logging.INFO) + + def test_custom_level_from_env(self): + with patch.dict("os.environ", {"XQWATCHER_LOG_LEVEL": "DEBUG"}): + configure_logging() + self.assertEqual(logging.getLogger().level, logging.DEBUG) + + def test_stdout_handler_installed(self): + import sys + with patch.dict("os.environ", {}, clear=True): + configure_logging() + handlers = logging.getLogger().handlers + self.assertEqual(len(handlers), 1) + self.assertIsInstance(handlers[0], logging.StreamHandler) + self.assertIs(handlers[0].stream, sys.stdout) + + def test_requests_logger_set_to_warning(self): + with patch.dict("os.environ", {"XQWATCHER_LOG_LEVEL": "DEBUG"}): + configure_logging() + self.assertEqual(logging.getLogger("requests").level, logging.WARNING) + 
self.assertEqual(logging.getLogger("urllib3").level, logging.WARNING) + + def test_invalid_level_raises(self): + with patch.dict("os.environ", {"XQWATCHER_LOG_LEVEL": "NOTLEVEL"}): + with self.assertRaises(ValueError): + configure_logging() + + +class TestGetManagerConfigFromEnv(unittest.TestCase): + def test_defaults_when_no_env_vars_set(self): + with patch.dict("os.environ", {}, clear=True): + config = get_manager_config_from_env() + self.assertEqual(config, MANAGER_CONFIG_DEFAULTS) + + def test_poll_time_from_env(self): + with patch.dict("os.environ", {"XQWATCHER_POLL_TIME": "30"}): + config = get_manager_config_from_env() + self.assertEqual(config["POLL_TIME"], 30) + + def test_requests_timeout_from_env(self): + with patch.dict("os.environ", {"XQWATCHER_REQUESTS_TIMEOUT": "5"}): + config = get_manager_config_from_env() + self.assertEqual(config["REQUESTS_TIMEOUT"], 5) + + def test_poll_interval_from_env(self): + with patch.dict("os.environ", {"XQWATCHER_POLL_INTERVAL": "3"}): + config = get_manager_config_from_env() + self.assertEqual(config["POLL_INTERVAL"], 3) + + def test_login_poll_interval_from_env(self): + with patch.dict("os.environ", {"XQWATCHER_LOGIN_POLL_INTERVAL": "15"}): + config = get_manager_config_from_env() + self.assertEqual(config["LOGIN_POLL_INTERVAL"], 15) + + def test_http_basic_auth_from_env(self): + with patch.dict("os.environ", {"XQWATCHER_HTTP_BASIC_AUTH": "user:secret"}): + config = get_manager_config_from_env() + self.assertEqual(config["HTTP_BASIC_AUTH"], ("user", "secret")) + + def test_http_basic_auth_empty_string_returns_none(self): + with patch.dict("os.environ", {"XQWATCHER_HTTP_BASIC_AUTH": ""}): + config = get_manager_config_from_env() + self.assertIsNone(config["HTTP_BASIC_AUTH"]) + + def test_follow_client_redirects_true_values(self): + for truthy in ("true", "True", "TRUE", "1", "yes", "YES"): + with self.subTest(value=truthy): + with patch.dict("os.environ", {"XQWATCHER_FOLLOW_CLIENT_REDIRECTS": truthy}): + config = 
get_manager_config_from_env() + self.assertTrue(config["FOLLOW_CLIENT_REDIRECTS"]) + + def test_follow_client_redirects_false_values(self): + for falsy in ("false", "False", "FALSE", "0", "no", "NO"): + with self.subTest(value=falsy): + with patch.dict("os.environ", {"XQWATCHER_FOLLOW_CLIENT_REDIRECTS": falsy}): + config = get_manager_config_from_env() + self.assertFalse(config["FOLLOW_CLIENT_REDIRECTS"]) + + def test_follow_client_redirects_default_is_false(self): + with patch.dict("os.environ", {}, clear=True): + config = get_manager_config_from_env() + self.assertFalse(config["FOLLOW_CLIENT_REDIRECTS"]) + + def test_all_env_vars_together(self): + env = { + "XQWATCHER_HTTP_BASIC_AUTH": "admin:pass", + "XQWATCHER_POLL_TIME": "20", + "XQWATCHER_REQUESTS_TIMEOUT": "3", + "XQWATCHER_POLL_INTERVAL": "2", + "XQWATCHER_LOGIN_POLL_INTERVAL": "10", + "XQWATCHER_FOLLOW_CLIENT_REDIRECTS": "true", + } + with patch.dict("os.environ", env): + config = get_manager_config_from_env() + self.assertEqual(config["HTTP_BASIC_AUTH"], ("admin", "pass")) + self.assertEqual(config["POLL_TIME"], 20) + self.assertEqual(config["REQUESTS_TIMEOUT"], 3) + self.assertEqual(config["POLL_INTERVAL"], 2) + self.assertEqual(config["LOGIN_POLL_INTERVAL"], 10) + self.assertTrue(config["FOLLOW_CLIENT_REDIRECTS"]) + + def test_returns_all_expected_keys(self): + config = get_manager_config_from_env() + self.assertEqual(set(config.keys()), set(MANAGER_CONFIG_DEFAULTS.keys())) + + +class TestGetContainerGraderDefaults(unittest.TestCase): + def test_built_in_defaults_when_no_env(self): + with patch.dict("os.environ", {}, clear=True): + d = get_container_grader_defaults() + self.assertEqual(d["backend"], "kubernetes") + self.assertEqual(d["namespace"], "default") + self.assertEqual(d["cpu_limit"], "500m") + self.assertEqual(d["memory_limit"], "256Mi") + self.assertEqual(d["timeout"], 20) + + def test_backend_from_env(self): + with patch.dict("os.environ", {"XQWATCHER_GRADER_BACKEND": "docker"}): + d = 
get_container_grader_defaults() + self.assertEqual(d["backend"], "docker") + + def test_namespace_from_env(self): + with patch.dict("os.environ", {"XQWATCHER_GRADER_NAMESPACE": "grading"}): + d = get_container_grader_defaults() + self.assertEqual(d["namespace"], "grading") + + def test_cpu_limit_from_env(self): + with patch.dict("os.environ", {"XQWATCHER_GRADER_CPU_LIMIT": "1"}): + d = get_container_grader_defaults() + self.assertEqual(d["cpu_limit"], "1") + + def test_memory_limit_from_env(self): + with patch.dict("os.environ", {"XQWATCHER_GRADER_MEMORY_LIMIT": "1Gi"}): + d = get_container_grader_defaults() + self.assertEqual(d["memory_limit"], "1Gi") + + def test_timeout_from_env(self): + with patch.dict("os.environ", {"XQWATCHER_GRADER_TIMEOUT": "60"}): + d = get_container_grader_defaults() + self.assertEqual(d["timeout"], 60) + + def test_all_grader_env_vars_together(self): + env = { + "XQWATCHER_GRADER_BACKEND": "docker", + "XQWATCHER_GRADER_NAMESPACE": "ci", + "XQWATCHER_GRADER_CPU_LIMIT": "250m", + "XQWATCHER_GRADER_MEMORY_LIMIT": "128Mi", + "XQWATCHER_GRADER_TIMEOUT": "10", + } + with patch.dict("os.environ", env): + d = get_container_grader_defaults() + self.assertEqual(d["backend"], "docker") + self.assertEqual(d["namespace"], "ci") + self.assertEqual(d["cpu_limit"], "250m") + self.assertEqual(d["memory_limit"], "128Mi") + self.assertEqual(d["timeout"], 10) + + +class TestGetXqueueServers(unittest.TestCase): + def _write_servers_file(self, tmp_dir, data): + path = Path(tmp_dir) / "xqueue_servers.json" + path.write_text(json.dumps(data)) + return path + + def test_returns_empty_dict_when_file_absent(self): + with tempfile.TemporaryDirectory() as tmp: + path = Path(tmp) / "xqueue_servers.json" + self.assertEqual(get_xqueue_servers(path), {}) + + def test_returns_named_servers(self): + data = { + "default": {"SERVER": "http://xqueue:18040", "AUTH": ["u", "p"]}, + "secondary": {"SERVER": "http://xqueue2:18040", "AUTH": ["u2", "p2"]}, + } + with 
tempfile.TemporaryDirectory() as tmp: + path = self._write_servers_file(tmp, data) + servers = get_xqueue_servers(path) + self.assertEqual(set(servers.keys()), {"default", "secondary"}) + self.assertEqual(servers["default"]["SERVER"], "http://xqueue:18040") + self.assertEqual(servers["default"]["AUTH"], ["u", "p"]) + + def test_missing_server_key_raises(self): + data = {"bad": {"AUTH": ["u", "p"]}} + with tempfile.TemporaryDirectory() as tmp: + path = self._write_servers_file(tmp, data) + with self.assertRaises(ValueError) as ctx: + get_xqueue_servers(path) + self.assertIn("SERVER", str(ctx.exception)) + self.assertIn("bad", str(ctx.exception)) + + def test_missing_auth_key_raises(self): + data = {"bad": {"SERVER": "http://xqueue:18040"}} + with tempfile.TemporaryDirectory() as tmp: + path = self._write_servers_file(tmp, data) + with self.assertRaises(ValueError) as ctx: + get_xqueue_servers(path) + self.assertIn("AUTH", str(ctx.exception)) + self.assertIn("bad", str(ctx.exception)) diff --git a/tests/test_grader.py b/tests/test_grader.py index 9093983..97662f1 100644 --- a/tests/test_grader.py +++ b/tests/test_grader.py @@ -2,12 +2,12 @@ from unittest import mock import json import sys -from path import Path +from pathlib import Path from queue import Queue from xqueue_watcher import grader -MYDIR = Path(__file__).dirname() / 'fixtures' +MYDIR = Path(__file__).parent / 'fixtures' class MockGrader(grader.Grader): @@ -16,12 +16,12 @@ def grade(self, grader_path, grader_config, student_response): errors = [] correct = 0 score = 0 - if grader_path.endswith('/correct'): + if grader_path.name == 'correct': correct = 1 score = 1 tests.append(('short', 'long', True, 'expected', 'actual')) tests.append(('short', '', True, 'expected', 'actual')) - elif grader_path.endswith('/incorrect'): + elif grader_path.name == 'incorrect': tests.append(('short', 'long', False, 'expected', 'actual')) errors.append('THIS IS AN ERROR') errors.append('\x00\xc3\x83\xc3\xb8\x02') diff --git 
a/tests/test_jailed_grader.py b/tests/test_jailed_grader.py index e4ade77..1c989cf 100644 --- a/tests/test_jailed_grader.py +++ b/tests/test_jailed_grader.py @@ -3,12 +3,20 @@ import sys import textwrap import unittest -from path import Path +from pathlib import Path + +import pytest + +try: + from codejail.jail_code import configure + HAS_CODEJAIL = True +except ImportError: + HAS_CODEJAIL = False from xqueue_watcher.jailedgrader import JailedGrader -from codejail.jail_code import configure +@pytest.mark.skipif(not HAS_CODEJAIL, reason="codejail not installed") class JailedGraderTests(unittest.TestCase): def setUp(self): configure("python", sys.executable, user=getpass.getuser()) @@ -21,7 +29,7 @@ def setUp(self): user=getpass.getuser(), ) break - self.grader_root = Path(__file__).dirname() / 'fixtures' + self.grader_root = Path(__file__).parent / 'fixtures' self.g = JailedGrader(grader_root=self.grader_root) self.g3 = JailedGrader(grader_root=self.grader_root, codejail_python='python3') diff --git a/tests/test_manager.py b/tests/test_manager.py index 7b99bd7..b3eee4f 100644 --- a/tests/test_manager.py +++ b/tests/test_manager.py @@ -1,5 +1,5 @@ import unittest -from path import Path +from pathlib import Path import json from unittest.mock import Mock import time @@ -76,8 +76,11 @@ def test_codejail_config(self): }) self.assertTrue(codejail.jail_code.is_configured("other-python")) - # now we'll see if the codejail config is inherited in the handler subprocess + # Verify codejail config is visible to the grader running in the same process. + # (fork_per_item=False avoids relying on multiprocessing start-method-specific + # state inheritance, which varies between 'fork' and 'forkserver'.) 
handler_config = self.config['test1'].copy() + handler_config['HANDLERS'][0]['KWARGS'] = {'fork_per_item': False} client = self.m.client_from_config("test", handler_config) client.session = MockXQueueServer() client._handle_submission(json.dumps({ @@ -162,6 +165,90 @@ def test_main_with_errors(self): self.assertIn('required', err_msg) sys.stderr = stderr - mydir = Path(__file__).dirname() - args = ['-d', mydir / "fixtures/config"] + mydir = Path(__file__).parent + args = ['-d', str(mydir / "fixtures/config")] self.assertEqual(manager.main(args), 0) + + +class ServerRefTests(unittest.TestCase): + def setUp(self): + self.m = manager.Manager() + self.m.xqueue_servers = { + "primary": { + "SERVER": "http://primary-xqueue:18040", + "AUTH": ["user1", "pass1"], + }, + } + + def tearDown(self): + try: + self.m.shutdown() + except SystemExit: + pass + + def _simple_config(self, queue_config): + """Wrap a single queue config dict with a handler.""" + return { + "HANDLERS": [{"HANDLER": "tests.test_grader.MockGrader"}], + **queue_config, + } + + def test_server_ref_resolves_url_and_auth(self): + config = self._simple_config({"SERVER_REF": "primary"}) + client = self.m.client_from_config("my-queue", config) + self.assertEqual(client.xqueue_server, "http://primary-xqueue:18040") + self.assertEqual(client.username, "user1") + self.assertEqual(client.password, "pass1") + + def test_server_ref_unknown_raises(self): + config = self._simple_config({"SERVER_REF": "nonexistent"}) + with self.assertRaises(ValueError) as ctx: + self.m.client_from_config("my-queue", config) + self.assertIn("nonexistent", str(ctx.exception)) + + def test_server_ref_with_server_key_raises(self): + config = self._simple_config({ + "SERVER_REF": "primary", + "SERVER": "http://other:18040", + }) + with self.assertRaises(ValueError) as ctx: + self.m.client_from_config("my-queue", config) + self.assertIn("SERVER_REF", str(ctx.exception)) + + def test_server_ref_with_auth_key_raises(self): + config = 
self._simple_config({ + "SERVER_REF": "primary", + "AUTH": ["u", "p"], + }) + with self.assertRaises(ValueError) as ctx: + self.m.client_from_config("my-queue", config) + self.assertIn("SERVER_REF", str(ctx.exception)) + + def test_no_server_ref_still_works(self): + config = self._simple_config({ + "SERVER": "http://direct:18040", + "AUTH": ["u", "p"], + }) + client = self.m.client_from_config("my-queue", config) + self.assertEqual(client.xqueue_server, "http://direct:18040") + self.assertEqual(client.username, "u") + + def test_configure_from_directory_loads_xqueue_servers(self): + mydir = Path(__file__).parent + m = manager.Manager() + m.configure_from_directory(mydir / "fixtures/config") + self.assertIn("fixture-server", m.xqueue_servers) + self.assertEqual( + m.xqueue_servers["fixture-server"]["SERVER"], + "http://fixture-xqueue:18040", + ) + + def test_configure_from_directory_no_servers_file(self): + import tempfile + with tempfile.TemporaryDirectory() as tmp: + tmp = Path(tmp) + (tmp / "conf.d").mkdir() + (tmp / "conf.d" / "empty.json").write_text("{}") + m = manager.Manager() + m.configure_from_directory(tmp) + self.assertEqual(m.xqueue_servers, {}) diff --git a/tests/test_metrics.py b/tests/test_metrics.py new file mode 100644 index 0000000..c21c7f4 --- /dev/null +++ b/tests/test_metrics.py @@ -0,0 +1,90 @@ +import unittest +from unittest.mock import patch, MagicMock + +from opentelemetry.sdk.metrics import MeterProvider +from opentelemetry.sdk.metrics.export import InMemoryMetricReader + +from xqueue_watcher.metrics import ( + _build_meter_provider, + _METER_NAME, + _DEFAULT_SERVICE_NAME, +) + + +class TestBuildMeterProvider(unittest.TestCase): + def test_returns_meter_provider(self): + with patch.dict("os.environ", {}, clear=True): + provider = _build_meter_provider() + self.assertIsInstance(provider, MeterProvider) + + def test_no_otlp_endpoint_means_no_readers(self): + env = {"OTEL_EXPORTER_OTLP_ENDPOINT": ""} + with 
patch.dict("os.environ", env): + provider = _build_meter_provider() + # No PeriodicExportingMetricReader attached → internal reader list is empty. + self.assertEqual(provider._sdk_config.metric_readers, []) + + def test_otlp_endpoint_adds_reader(self): + env = {"OTEL_EXPORTER_OTLP_ENDPOINT": "http://otel-collector:4318"} + mock_exporter = MagicMock() + mock_reader = MagicMock() + with patch.dict("os.environ", env), \ + patch("opentelemetry.exporter.otlp.proto.http.metric_exporter.OTLPMetricExporter", + return_value=mock_exporter) as MockExporter, \ + patch("xqueue_watcher.metrics.PeriodicExportingMetricReader", + return_value=mock_reader) as MockReader: + provider = _build_meter_provider() + MockExporter.assert_called_once() + MockReader.assert_called_once_with(mock_exporter) + self.assertIn(mock_reader, provider._sdk_config.metric_readers) + + def test_default_service_name_applied(self): + # Empty OTEL_SERVICE_NAME should still fall back to the built-in default. + with patch.dict("os.environ", {"OTEL_SERVICE_NAME": ""}): + provider = _build_meter_provider() + attrs = provider._sdk_config.resource.attributes + self.assertEqual(attrs.get("service.name"), _DEFAULT_SERVICE_NAME) + + def test_custom_service_name_applied(self): + env = {"OTEL_SERVICE_NAME": "my-grader"} + with patch.dict("os.environ", env): + provider = _build_meter_provider() + attrs = provider._sdk_config.resource.attributes + self.assertEqual(attrs.get("service.name"), "my-grader") + + +class TestInstruments(unittest.TestCase): + """Verify instruments record correctly against an in-memory provider.""" + + def setUp(self): + self.reader = InMemoryMetricReader() + self.provider = MeterProvider(metric_readers=[self.reader]) + self.meter = self.provider.get_meter(_METER_NAME) + + def _metric_names(self): + return {m.name for m in self.reader.get_metrics_data().resource_metrics[0].scope_metrics[0].metrics} + + def test_process_item_counter(self): + counter = 
self.meter.create_counter("xqueuewatcher.process_item") + counter.add(1) + counter.add(2) + names = self._metric_names() + self.assertIn("xqueuewatcher.process_item", names) + + def test_grader_payload_error_counter(self): + counter = self.meter.create_counter("xqueuewatcher.grader_payload_error") + counter.add(1) + names = self._metric_names() + self.assertIn("xqueuewatcher.grader_payload_error", names) + + def test_grading_time_histogram(self): + hist = self.meter.create_histogram("xqueuewatcher.grading_time", unit="s") + hist.record(0.42) + names = self._metric_names() + self.assertIn("xqueuewatcher.grading_time", names) + + def test_replies_counter(self): + counter = self.meter.create_counter("xqueuewatcher.replies") + counter.add(1) + names = self._metric_names() + self.assertIn("xqueuewatcher.replies", names) diff --git a/uv.lock b/uv.lock new file mode 100644 index 0000000..747fd5b --- /dev/null +++ b/uv.lock @@ -0,0 +1,726 @@ +version = 1 +revision = 3 +requires-python = ">=3.11" + +[[package]] +name = "certifi" +version = "2026.2.25" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/af/2d/7bf41579a8986e348fa033a31cdd0e4121114f6bce2457e8876010b092dd/certifi-2026.2.25.tar.gz", hash = "sha256:e887ab5cee78ea814d3472169153c2d12cd43b14bd03329a39a9c6e2e80bfba7", size = 155029, upload-time = "2026-02-25T02:54:17.342Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/9a/3c/c17fb3ca2d9c3acff52e30b309f538586f9f5b9c9cf454f3845fc9af4881/certifi-2026.2.25-py3-none-any.whl", hash = "sha256:027692e4402ad994f1c42e52a4997a9763c646b73e4096e4d5d6db8af1d6f0fa", size = 153684, upload-time = "2026-02-25T02:54:15.766Z" }, +] + +[[package]] +name = "charset-normalizer" +version = "3.4.5" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/1d/35/02daf95b9cd686320bb622eb148792655c9412dbb9b67abb5694e5910a24/charset_normalizer-3.4.5.tar.gz", hash = 
"sha256:95adae7b6c42a6c5b5b559b1a99149f090a57128155daeea91732c8d970d8644", size = 134804, upload-time = "2026-03-06T06:03:19.46Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/8f/9e/bcec3b22c64ecec47d39bf5167c2613efd41898c019dccd4183f6aa5d6a7/charset_normalizer-3.4.5-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:610f72c0ee565dfb8ae1241b666119582fdbfe7c0975c175be719f940e110694", size = 279531, upload-time = "2026-03-06T06:00:52.252Z" }, + { url = "https://files.pythonhosted.org/packages/58/12/81fd25f7e7078ab5d1eedbb0fac44be4904ae3370a3bf4533c8f2d159acd/charset_normalizer-3.4.5-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:60d68e820af339df4ae8358c7a2e7596badeb61e544438e489035f9fbf3246a5", size = 188006, upload-time = "2026-03-06T06:00:53.8Z" }, + { url = "https://files.pythonhosted.org/packages/ae/6e/f2d30e8c27c1b0736a6520311982cf5286cfc7f6cac77d7bc1325e3a23f2/charset_normalizer-3.4.5-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:10b473fc8dca1c3ad8559985794815f06ca3fc71942c969129070f2c3cdf7281", size = 205085, upload-time = "2026-03-06T06:00:55.311Z" }, + { url = "https://files.pythonhosted.org/packages/d0/90/d12cefcb53b5931e2cf792a33718d7126efb116a320eaa0742c7059a95e4/charset_normalizer-3.4.5-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:d4eb8ac7469b2a5d64b5b8c04f84d8bf3ad340f4514b98523805cbf46e3b3923", size = 200545, upload-time = "2026-03-06T06:00:56.532Z" }, + { url = "https://files.pythonhosted.org/packages/03/f4/44d3b830a20e89ff82a3134912d9a1cf6084d64f3b95dcad40f74449a654/charset_normalizer-3.4.5-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:5bcb3227c3d9aaf73eaaab1db7ccd80a8995c509ee9941e2aae060ca6e4e5d81", size = 193863, upload-time = "2026-03-06T06:00:57.823Z" }, + { url = 
"https://files.pythonhosted.org/packages/25/4b/f212119c18a6320a9d4a730d1b4057875cdeabf21b3614f76549042ef8a8/charset_normalizer-3.4.5-cp311-cp311-manylinux_2_31_armv7l.whl", hash = "sha256:75ee9c1cce2911581a70a3c0919d8bccf5b1cbc9b0e5171400ec736b4b569497", size = 181827, upload-time = "2026-03-06T06:00:59.323Z" }, + { url = "https://files.pythonhosted.org/packages/74/00/b26158e48b425a202a92965f8069e8a63d9af1481dfa206825d7f74d2a3c/charset_normalizer-3.4.5-cp311-cp311-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:1d1401945cb77787dbd3af2446ff2d75912327c4c3a1526ab7955ecf8600687c", size = 191085, upload-time = "2026-03-06T06:01:00.546Z" }, + { url = "https://files.pythonhosted.org/packages/c4/c2/1c1737bf6fd40335fe53d28fe49afd99ee4143cc57a845e99635ce0b9b6d/charset_normalizer-3.4.5-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:0a45e504f5e1be0bd385935a8e1507c442349ca36f511a47057a71c9d1d6ea9e", size = 190688, upload-time = "2026-03-06T06:01:02.479Z" }, + { url = "https://files.pythonhosted.org/packages/5a/3d/abb5c22dc2ef493cd56522f811246a63c5427c08f3e3e50ab663de27fcf4/charset_normalizer-3.4.5-cp311-cp311-musllinux_1_2_armv7l.whl", hash = "sha256:e09f671a54ce70b79a1fc1dc6da3072b7ef7251fadb894ed92d9aa8218465a5f", size = 183077, upload-time = "2026-03-06T06:01:04.231Z" }, + { url = "https://files.pythonhosted.org/packages/44/33/5298ad4d419a58e25b3508e87f2758d1442ff00c2471f8e0403dab8edad5/charset_normalizer-3.4.5-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:d01de5e768328646e6a3fa9e562706f8f6641708c115c62588aef2b941a4f88e", size = 206706, upload-time = "2026-03-06T06:01:05.773Z" }, + { url = "https://files.pythonhosted.org/packages/7b/17/51e7895ac0f87c3b91d276a449ef09f5532a7529818f59646d7a55089432/charset_normalizer-3.4.5-cp311-cp311-musllinux_1_2_riscv64.whl", hash = "sha256:131716d6786ad5e3dc542f5cc6f397ba3339dc0fb87f87ac30e550e8987756af", size = 191665, upload-time = "2026-03-06T06:01:07.473Z" }, + { url = 
"https://files.pythonhosted.org/packages/90/8f/cce9adf1883e98906dbae380d769b4852bb0fa0004bc7d7a2243418d3ea8/charset_normalizer-3.4.5-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:1a374cc0b88aa710e8865dc1bd6edb3743c59f27830f0293ab101e4cf3ce9f85", size = 201950, upload-time = "2026-03-06T06:01:08.973Z" }, + { url = "https://files.pythonhosted.org/packages/08/ca/bce99cd5c397a52919e2769d126723f27a4c037130374c051c00470bcd38/charset_normalizer-3.4.5-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:d31f0d1671e1534e395f9eb84a68e0fb670e1edb1fe819a9d7f564ae3bc4e53f", size = 195830, upload-time = "2026-03-06T06:01:10.155Z" }, + { url = "https://files.pythonhosted.org/packages/87/4f/2e3d023a06911f1281f97b8f036edc9872167036ca6f55cc874a0be6c12c/charset_normalizer-3.4.5-cp311-cp311-win32.whl", hash = "sha256:cace89841c0599d736d3d74a27bc5821288bb47c5441923277afc6059d7fbcb4", size = 132029, upload-time = "2026-03-06T06:01:11.706Z" }, + { url = "https://files.pythonhosted.org/packages/fe/1f/a853b73d386521fd44b7f67ded6b17b7b2367067d9106a5c4b44f9a34274/charset_normalizer-3.4.5-cp311-cp311-win_amd64.whl", hash = "sha256:f8102ae93c0bc863b1d41ea0f4499c20a83229f52ed870850892df555187154a", size = 142404, upload-time = "2026-03-06T06:01:12.865Z" }, + { url = "https://files.pythonhosted.org/packages/b4/10/dba36f76b71c38e9d391abe0fd8a5b818790e053c431adecfc98c35cd2a9/charset_normalizer-3.4.5-cp311-cp311-win_arm64.whl", hash = "sha256:ed98364e1c262cf5f9363c3eca8c2df37024f52a8fa1180a3610014f26eac51c", size = 132796, upload-time = "2026-03-06T06:01:14.106Z" }, + { url = "https://files.pythonhosted.org/packages/9c/b6/9ee9c1a608916ca5feae81a344dffbaa53b26b90be58cc2159e3332d44ec/charset_normalizer-3.4.5-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:ed97c282ee4f994ef814042423a529df9497e3c666dca19be1d4cd1129dc7ade", size = 280976, upload-time = "2026-03-06T06:01:15.276Z" }, + { url = 
"https://files.pythonhosted.org/packages/f8/d8/a54f7c0b96f1df3563e9190f04daf981e365a9b397eedfdfb5dbef7e5c6c/charset_normalizer-3.4.5-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:0294916d6ccf2d069727d65973c3a1ca477d68708db25fd758dd28b0827cff54", size = 189356, upload-time = "2026-03-06T06:01:16.511Z" }, + { url = "https://files.pythonhosted.org/packages/42/69/2bf7f76ce1446759a5787cb87d38f6a61eb47dbbdf035cfebf6347292a65/charset_normalizer-3.4.5-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:dc57a0baa3eeedd99fafaef7511b5a6ef4581494e8168ee086031744e2679467", size = 206369, upload-time = "2026-03-06T06:01:17.853Z" }, + { url = "https://files.pythonhosted.org/packages/10/9c/949d1a46dab56b959d9a87272482195f1840b515a3380e39986989a893ae/charset_normalizer-3.4.5-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:ed1a9a204f317ef879b32f9af507d47e49cd5e7f8e8d5d96358c98373314fc60", size = 203285, upload-time = "2026-03-06T06:01:19.473Z" }, + { url = "https://files.pythonhosted.org/packages/67/5c/ae30362a88b4da237d71ea214a8c7eb915db3eec941adda511729ac25fa2/charset_normalizer-3.4.5-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:7ad83b8f9379176c841f8865884f3514d905bcd2a9a3b210eaa446e7d2223e4d", size = 196274, upload-time = "2026-03-06T06:01:20.728Z" }, + { url = "https://files.pythonhosted.org/packages/b2/07/c9f2cb0e46cb6d64fdcc4f95953747b843bb2181bda678dc4e699b8f0f9a/charset_normalizer-3.4.5-cp312-cp312-manylinux_2_31_armv7l.whl", hash = "sha256:a118e2e0b5ae6b0120d5efa5f866e58f2bb826067a646431da4d6a2bdae7950e", size = 184715, upload-time = "2026-03-06T06:01:22.194Z" }, + { url = "https://files.pythonhosted.org/packages/36/64/6b0ca95c44fddf692cd06d642b28f63009d0ce325fad6e9b2b4d0ef86a52/charset_normalizer-3.4.5-cp312-cp312-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = 
"sha256:754f96058e61a5e22e91483f823e07df16416ce76afa4ebf306f8e1d1296d43f", size = 193426, upload-time = "2026-03-06T06:01:23.795Z" }, + { url = "https://files.pythonhosted.org/packages/50/bc/a730690d726403743795ca3f5bb2baf67838c5fea78236098f324b965e40/charset_normalizer-3.4.5-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:0c300cefd9b0970381a46394902cd18eaf2aa00163f999590ace991989dcd0fc", size = 191780, upload-time = "2026-03-06T06:01:25.053Z" }, + { url = "https://files.pythonhosted.org/packages/97/4f/6c0bc9af68222b22951552d73df4532b5be6447cee32d58e7e8c74ecbb7b/charset_normalizer-3.4.5-cp312-cp312-musllinux_1_2_armv7l.whl", hash = "sha256:c108f8619e504140569ee7de3f97d234f0fbae338a7f9f360455071ef9855a95", size = 185805, upload-time = "2026-03-06T06:01:26.294Z" }, + { url = "https://files.pythonhosted.org/packages/dd/b9/a523fb9b0ee90814b503452b2600e4cbc118cd68714d57041564886e7325/charset_normalizer-3.4.5-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:d1028de43596a315e2720a9849ee79007ab742c06ad8b45a50db8cdb7ed4a82a", size = 208342, upload-time = "2026-03-06T06:01:27.55Z" }, + { url = "https://files.pythonhosted.org/packages/4d/61/c59e761dee4464050713e50e27b58266cc8e209e518c0b378c1580c959ba/charset_normalizer-3.4.5-cp312-cp312-musllinux_1_2_riscv64.whl", hash = "sha256:19092dde50335accf365cce21998a1c6dd8eafd42c7b226eb54b2747cdce2fac", size = 193661, upload-time = "2026-03-06T06:01:29.051Z" }, + { url = "https://files.pythonhosted.org/packages/1c/43/729fa30aad69783f755c5ad8649da17ee095311ca42024742701e202dc59/charset_normalizer-3.4.5-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:4354e401eb6dab9aed3c7b4030514328a6c748d05e1c3e19175008ca7de84fb1", size = 204819, upload-time = "2026-03-06T06:01:30.298Z" }, + { url = "https://files.pythonhosted.org/packages/87/33/d9b442ce5a91b96fc0840455a9e49a611bbadae6122778d0a6a79683dd31/charset_normalizer-3.4.5-cp312-cp312-musllinux_1_2_x86_64.whl", hash = 
"sha256:a68766a3c58fde7f9aaa22b3786276f62ab2f594efb02d0a1421b6282e852e98", size = 198080, upload-time = "2026-03-06T06:01:31.478Z" }, + { url = "https://files.pythonhosted.org/packages/56/5a/b8b5a23134978ee9885cee2d6995f4c27cc41f9baded0a9685eabc5338f0/charset_normalizer-3.4.5-cp312-cp312-win32.whl", hash = "sha256:1827734a5b308b65ac54e86a618de66f935a4f63a8a462ff1e19a6788d6c2262", size = 132630, upload-time = "2026-03-06T06:01:33.056Z" }, + { url = "https://files.pythonhosted.org/packages/70/53/e44a4c07e8904500aec95865dc3f6464dc3586a039ef0df606eb3ac38e35/charset_normalizer-3.4.5-cp312-cp312-win_amd64.whl", hash = "sha256:728c6a963dfab66ef865f49286e45239384249672cd598576765acc2a640a636", size = 142856, upload-time = "2026-03-06T06:01:34.489Z" }, + { url = "https://files.pythonhosted.org/packages/ea/aa/c5628f7cad591b1cf45790b7a61483c3e36cf41349c98af7813c483fd6e8/charset_normalizer-3.4.5-cp312-cp312-win_arm64.whl", hash = "sha256:75dfd1afe0b1647449e852f4fb428195a7ed0588947218f7ba929f6538487f02", size = 132982, upload-time = "2026-03-06T06:01:35.641Z" }, + { url = "https://files.pythonhosted.org/packages/f5/48/9f34ec4bb24aa3fdba1890c1bddb97c8a4be1bd84ef5c42ac2352563ad05/charset_normalizer-3.4.5-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:ac59c15e3f1465f722607800c68713f9fbc2f672b9eb649fe831da4019ae9b23", size = 280788, upload-time = "2026-03-06T06:01:37.126Z" }, + { url = "https://files.pythonhosted.org/packages/0e/09/6003e7ffeb90cc0560da893e3208396a44c210c5ee42efff539639def59b/charset_normalizer-3.4.5-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:165c7b21d19365464e8f70e5ce5e12524c58b48c78c1f5a57524603c1ab003f8", size = 188890, upload-time = "2026-03-06T06:01:38.73Z" }, + { url = "https://files.pythonhosted.org/packages/42/1e/02706edf19e390680daa694d17e2b8eab4b5f7ac285e2a51168b4b22ee6b/charset_normalizer-3.4.5-cp313-cp313-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = 
"sha256:28269983f25a4da0425743d0d257a2d6921ea7d9b83599d4039486ec5b9f911d", size = 206136, upload-time = "2026-03-06T06:01:40.016Z" }, + { url = "https://files.pythonhosted.org/packages/c7/87/942c3def1b37baf3cf786bad01249190f3ca3d5e63a84f831e704977de1f/charset_normalizer-3.4.5-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:d27ce22ec453564770d29d03a9506d449efbb9fa13c00842262b2f6801c48cce", size = 202551, upload-time = "2026-03-06T06:01:41.522Z" }, + { url = "https://files.pythonhosted.org/packages/94/0a/af49691938dfe175d71b8a929bd7e4ace2809c0c5134e28bc535660d5262/charset_normalizer-3.4.5-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:0625665e4ebdddb553ab185de5db7054393af8879fb0c87bd5690d14379d6819", size = 195572, upload-time = "2026-03-06T06:01:43.208Z" }, + { url = "https://files.pythonhosted.org/packages/20/ea/dfb1792a8050a8e694cfbde1570ff97ff74e48afd874152d38163d1df9ae/charset_normalizer-3.4.5-cp313-cp313-manylinux_2_31_armv7l.whl", hash = "sha256:c23eb3263356d94858655b3e63f85ac5d50970c6e8febcdde7830209139cc37d", size = 184438, upload-time = "2026-03-06T06:01:44.755Z" }, + { url = "https://files.pythonhosted.org/packages/72/12/c281e2067466e3ddd0595bfaea58a6946765ace5c72dfa3edc2f5f118026/charset_normalizer-3.4.5-cp313-cp313-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:e6302ca4ae283deb0af68d2fbf467474b8b6aedcd3dab4db187e07f94c109763", size = 193035, upload-time = "2026-03-06T06:01:46.051Z" }, + { url = "https://files.pythonhosted.org/packages/ba/4f/3792c056e7708e10464bad0438a44708886fb8f92e3c3d29ec5e2d964d42/charset_normalizer-3.4.5-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:e51ae7d81c825761d941962450f50d041db028b7278e7b08930b4541b3e45cb9", size = 191340, upload-time = "2026-03-06T06:01:47.547Z" }, + { url = 
"https://files.pythonhosted.org/packages/e7/86/80ddba897127b5c7a9bccc481b0cd36c8fefa485d113262f0fe4332f0bf4/charset_normalizer-3.4.5-cp313-cp313-musllinux_1_2_armv7l.whl", hash = "sha256:597d10dec876923e5c59e48dbd366e852eacb2b806029491d307daea6b917d7c", size = 185464, upload-time = "2026-03-06T06:01:48.764Z" }, + { url = "https://files.pythonhosted.org/packages/4d/00/b5eff85ba198faacab83e0e4b6f0648155f072278e3b392a82478f8b988b/charset_normalizer-3.4.5-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:5cffde4032a197bd3b42fd0b9509ec60fb70918d6970e4cc773f20fc9180ca67", size = 208014, upload-time = "2026-03-06T06:01:50.371Z" }, + { url = "https://files.pythonhosted.org/packages/c8/11/d36f70be01597fd30850dde8a1269ebc8efadd23ba5785808454f2389bde/charset_normalizer-3.4.5-cp313-cp313-musllinux_1_2_riscv64.whl", hash = "sha256:2da4eedcb6338e2321e831a0165759c0c620e37f8cd044a263ff67493be8ffb3", size = 193297, upload-time = "2026-03-06T06:01:51.933Z" }, + { url = "https://files.pythonhosted.org/packages/1a/1d/259eb0a53d4910536c7c2abb9cb25f4153548efb42800c6a9456764649c0/charset_normalizer-3.4.5-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:65a126fb4b070d05340a84fc709dd9e7c75d9b063b610ece8a60197a291d0adf", size = 204321, upload-time = "2026-03-06T06:01:53.887Z" }, + { url = "https://files.pythonhosted.org/packages/84/31/faa6c5b9d3688715e1ed1bb9d124c384fe2fc1633a409e503ffe1c6398c1/charset_normalizer-3.4.5-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:c7a80a9242963416bd81f99349d5f3fce1843c303bd404f204918b6d75a75fd6", size = 197509, upload-time = "2026-03-06T06:01:56.439Z" }, + { url = "https://files.pythonhosted.org/packages/fd/a5/c7d9dd1503ffc08950b3260f5d39ec2366dd08254f0900ecbcf3a6197c7c/charset_normalizer-3.4.5-cp313-cp313-win32.whl", hash = "sha256:f1d725b754e967e648046f00c4facc42d414840f5ccc670c5670f59f83693e4f", size = 132284, upload-time = "2026-03-06T06:01:57.812Z" }, + { url = 
"https://files.pythonhosted.org/packages/b9/0f/57072b253af40c8aa6636e6de7d75985624c1eb392815b2f934199340a89/charset_normalizer-3.4.5-cp313-cp313-win_amd64.whl", hash = "sha256:e37bd100d2c5d3ba35db9c7c5ba5a9228cbcffe5c4778dc824b164e5257813d7", size = 142630, upload-time = "2026-03-06T06:01:59.062Z" }, + { url = "https://files.pythonhosted.org/packages/31/41/1c4b7cc9f13bd9d369ce3bc993e13d374ce25fa38a2663644283ecf422c1/charset_normalizer-3.4.5-cp313-cp313-win_arm64.whl", hash = "sha256:93b3b2cc5cf1b8743660ce77a4f45f3f6d1172068207c1defc779a36eea6bb36", size = 133254, upload-time = "2026-03-06T06:02:00.281Z" }, + { url = "https://files.pythonhosted.org/packages/43/be/0f0fd9bb4a7fa4fb5067fb7d9ac693d4e928d306f80a0d02bde43a7c4aee/charset_normalizer-3.4.5-cp314-cp314-macosx_10_15_universal2.whl", hash = "sha256:8197abe5ca1ffb7d91e78360f915eef5addff270f8a71c1fc5be24a56f3e4873", size = 280232, upload-time = "2026-03-06T06:02:01.508Z" }, + { url = "https://files.pythonhosted.org/packages/28/02/983b5445e4bef49cd8c9da73a8e029f0825f39b74a06d201bfaa2e55142a/charset_normalizer-3.4.5-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:a2aecdb364b8a1802afdc7f9327d55dad5366bc97d8502d0f5854e50712dbc5f", size = 189688, upload-time = "2026-03-06T06:02:02.857Z" }, + { url = "https://files.pythonhosted.org/packages/d0/88/152745c5166437687028027dc080e2daed6fe11cfa95a22f4602591c42db/charset_normalizer-3.4.5-cp314-cp314-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:a66aa5022bf81ab4b1bebfb009db4fd68e0c6d4307a1ce5ef6a26e5878dfc9e4", size = 206833, upload-time = "2026-03-06T06:02:05.127Z" }, + { url = "https://files.pythonhosted.org/packages/cb/0f/ebc15c8b02af2f19be9678d6eed115feeeccc45ce1f4b098d986c13e8769/charset_normalizer-3.4.5-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:d77f97e515688bd615c1d1f795d540f32542d514242067adcb8ef532504cb9ee", size = 202879, 
upload-time = "2026-03-06T06:02:06.446Z" }, + { url = "https://files.pythonhosted.org/packages/38/9c/71336bff6934418dc8d1e8a1644176ac9088068bc571da612767619c97b3/charset_normalizer-3.4.5-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:01a1ed54b953303ca7e310fafe0fe347aab348bd81834a0bcd602eb538f89d66", size = 195764, upload-time = "2026-03-06T06:02:08.763Z" }, + { url = "https://files.pythonhosted.org/packages/b7/95/ce92fde4f98615661871bc282a856cf9b8a15f686ba0af012984660d480b/charset_normalizer-3.4.5-cp314-cp314-manylinux_2_31_armv7l.whl", hash = "sha256:b2d37d78297b39a9eb9eb92c0f6df98c706467282055419df141389b23f93362", size = 183728, upload-time = "2026-03-06T06:02:10.137Z" }, + { url = "https://files.pythonhosted.org/packages/1c/e7/f5b4588d94e747ce45ae680f0f242bc2d98dbd4eccfab73e6160b6893893/charset_normalizer-3.4.5-cp314-cp314-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:e71bbb595973622b817c042bd943c3f3667e9c9983ce3d205f973f486fec98a7", size = 192937, upload-time = "2026-03-06T06:02:11.663Z" }, + { url = "https://files.pythonhosted.org/packages/f9/29/9d94ed6b929bf9f48bf6ede6e7474576499f07c4c5e878fb186083622716/charset_normalizer-3.4.5-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:4cd966c2559f501c6fd69294d082c2934c8dd4719deb32c22961a5ac6db0df1d", size = 192040, upload-time = "2026-03-06T06:02:13.489Z" }, + { url = "https://files.pythonhosted.org/packages/15/d2/1a093a1cf827957f9445f2fe7298bcc16f8fc5e05c1ed2ad1af0b239035e/charset_normalizer-3.4.5-cp314-cp314-musllinux_1_2_armv7l.whl", hash = "sha256:d5e52d127045d6ae01a1e821acfad2f3a1866c54d0e837828538fabe8d9d1bd6", size = 184107, upload-time = "2026-03-06T06:02:14.83Z" }, + { url = "https://files.pythonhosted.org/packages/0f/7d/82068ce16bd36135df7b97f6333c5d808b94e01d4599a682e2337ed5fd14/charset_normalizer-3.4.5-cp314-cp314-musllinux_1_2_ppc64le.whl", hash = "sha256:30a2b1a48478c3428d047ed9690d57c23038dac838a87ad624c85c0a78ebeb39", size 
= 208310, upload-time = "2026-03-06T06:02:16.165Z" }, + { url = "https://files.pythonhosted.org/packages/84/4e/4dfb52307bb6af4a5c9e73e482d171b81d36f522b21ccd28a49656baa680/charset_normalizer-3.4.5-cp314-cp314-musllinux_1_2_riscv64.whl", hash = "sha256:d8ed79b8f6372ca4254955005830fd61c1ccdd8c0fac6603e2c145c61dd95db6", size = 192918, upload-time = "2026-03-06T06:02:18.144Z" }, + { url = "https://files.pythonhosted.org/packages/08/a4/159ff7da662cf7201502ca89980b8f06acf3e887b278956646a8aeb178ab/charset_normalizer-3.4.5-cp314-cp314-musllinux_1_2_s390x.whl", hash = "sha256:c5af897b45fa606b12464ccbe0014bbf8c09191e0a66aab6aa9d5cf6e77e0c94", size = 204615, upload-time = "2026-03-06T06:02:19.821Z" }, + { url = "https://files.pythonhosted.org/packages/d6/62/0dd6172203cb6b429ffffc9935001fde42e5250d57f07b0c28c6046deb6b/charset_normalizer-3.4.5-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:1088345bcc93c58d8d8f3d783eca4a6e7a7752bbff26c3eee7e73c597c191c2e", size = 197784, upload-time = "2026-03-06T06:02:21.86Z" }, + { url = "https://files.pythonhosted.org/packages/c7/5e/1aab5cb737039b9c59e63627dc8bbc0d02562a14f831cc450e5f91d84ce1/charset_normalizer-3.4.5-cp314-cp314-win32.whl", hash = "sha256:ee57b926940ba00bca7ba7041e665cc956e55ef482f851b9b65acb20d867e7a2", size = 133009, upload-time = "2026-03-06T06:02:23.289Z" }, + { url = "https://files.pythonhosted.org/packages/40/65/e7c6c77d7aaa4c0d7974f2e403e17f0ed2cb0fc135f77d686b916bf1eead/charset_normalizer-3.4.5-cp314-cp314-win_amd64.whl", hash = "sha256:4481e6da1830c8a1cc0b746b47f603b653dadb690bcd851d039ffaefe70533aa", size = 143511, upload-time = "2026-03-06T06:02:26.195Z" }, + { url = "https://files.pythonhosted.org/packages/ba/91/52b0841c71f152f563b8e072896c14e3d83b195c188b338d3cc2e582d1d4/charset_normalizer-3.4.5-cp314-cp314-win_arm64.whl", hash = "sha256:97ab7787092eb9b50fb47fa04f24c75b768a606af1bcba1957f07f128a7219e4", size = 133775, upload-time = "2026-03-06T06:02:27.473Z" }, + { url = 
"https://files.pythonhosted.org/packages/c5/60/3a621758945513adfd4db86827a5bafcc615f913dbd0b4c2ed64a65731be/charset_normalizer-3.4.5-py3-none-any.whl", hash = "sha256:9db5e3fcdcee89a78c04dffb3fe33c79f77bd741a624946db2591c81b2fc85b0", size = 55455, upload-time = "2026-03-06T06:03:17.827Z" }, +] + +[[package]] +name = "colorama" +version = "0.4.6" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/d8/53/6f443c9a4a8358a93a6792e2acffb9d9d5cb0a5cfd8802644b7b1c9a02e4/colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44", size = 27697, upload-time = "2022-10-25T02:36:22.414Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/d1/d6/3965ed04c63042e047cb6a3e6ed1a63a35087b6a609aa3a15ed8ac56c221/colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6", size = 25335, upload-time = "2022-10-25T02:36:20.889Z" }, +] + +[[package]] +name = "coverage" +version = "7.13.4" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/24/56/95b7e30fa389756cb56630faa728da46a27b8c6eb46f9d557c68fff12b65/coverage-7.13.4.tar.gz", hash = "sha256:e5c8f6ed1e61a8b2dcdf31eb0b9bbf0130750ca79c1c49eb898e2ad86f5ccc91", size = 827239, upload-time = "2026-02-09T12:59:03.86Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/b4/ad/b59e5b451cf7172b8d1043dc0fa718f23aab379bc1521ee13d4bd9bfa960/coverage-7.13.4-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:d490ba50c3f35dd7c17953c68f3270e7ccd1c6642e2d2afe2d8e720b98f5a053", size = 219278, upload-time = "2026-02-09T12:56:31.673Z" }, + { url = "https://files.pythonhosted.org/packages/f1/17/0cb7ca3de72e5f4ef2ec2fa0089beafbcaaaead1844e8b8a63d35173d77d/coverage-7.13.4-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:19bc3c88078789f8ef36acb014d7241961dbf883fd2533d18cb1e7a5b4e28b11", size = 219783, upload-time = 
"2026-02-09T12:56:33.104Z" }, + { url = "https://files.pythonhosted.org/packages/ab/63/325d8e5b11e0eaf6d0f6a44fad444ae58820929a9b0de943fa377fe73e85/coverage-7.13.4-cp311-cp311-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:3998e5a32e62fdf410c0dbd3115df86297995d6e3429af80b8798aad894ca7aa", size = 250200, upload-time = "2026-02-09T12:56:34.474Z" }, + { url = "https://files.pythonhosted.org/packages/76/53/c16972708cbb79f2942922571a687c52bd109a7bd51175aeb7558dff2236/coverage-7.13.4-cp311-cp311-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:8e264226ec98e01a8e1054314af91ee6cde0eacac4f465cc93b03dbe0bce2fd7", size = 252114, upload-time = "2026-02-09T12:56:35.749Z" }, + { url = "https://files.pythonhosted.org/packages/eb/c2/7ab36d8b8cc412bec9ea2d07c83c48930eb4ba649634ba00cb7e4e0f9017/coverage-7.13.4-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:a3aa4e7b9e416774b21797365b358a6e827ffadaaca81b69ee02946852449f00", size = 254220, upload-time = "2026-02-09T12:56:37.796Z" }, + { url = "https://files.pythonhosted.org/packages/d6/4d/cf52c9a3322c89a0e6febdfbc83bb45c0ed3c64ad14081b9503adee702e7/coverage-7.13.4-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:71ca20079dd8f27fcf808817e281e90220475cd75115162218d0e27549f95fef", size = 256164, upload-time = "2026-02-09T12:56:39.016Z" }, + { url = "https://files.pythonhosted.org/packages/78/e9/eb1dd17bd6de8289df3580e967e78294f352a5df8a57ff4671ee5fc3dcd0/coverage-7.13.4-cp311-cp311-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:e2f25215f1a359ab17320b47bcdaca3e6e6356652e8256f2441e4ef972052903", size = 250325, upload-time = "2026-02-09T12:56:40.668Z" }, + { url = "https://files.pythonhosted.org/packages/71/07/8c1542aa873728f72267c07278c5cc0ec91356daf974df21335ccdb46368/coverage-7.13.4-cp311-cp311-musllinux_1_2_aarch64.whl", hash = 
"sha256:d65b2d373032411e86960604dc4edac91fdfb5dca539461cf2cbe78327d1e64f", size = 251913, upload-time = "2026-02-09T12:56:41.97Z" }, + { url = "https://files.pythonhosted.org/packages/74/d7/c62e2c5e4483a748e27868e4c32ad3daa9bdddbba58e1bc7a15e252baa74/coverage-7.13.4-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:94eb63f9b363180aff17de3e7c8760c3ba94664ea2695c52f10111244d16a299", size = 249974, upload-time = "2026-02-09T12:56:43.323Z" }, + { url = "https://files.pythonhosted.org/packages/98/9f/4c5c015a6e98ced54efd0f5cf8d31b88e5504ecb6857585fc0161bb1e600/coverage-7.13.4-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:e856bf6616714c3a9fbc270ab54103f4e685ba236fa98c054e8f87f266c93505", size = 253741, upload-time = "2026-02-09T12:56:45.155Z" }, + { url = "https://files.pythonhosted.org/packages/bd/59/0f4eef89b9f0fcd9633b5d350016f54126ab49426a70ff4c4e87446cabdc/coverage-7.13.4-cp311-cp311-musllinux_1_2_riscv64.whl", hash = "sha256:65dfcbe305c3dfe658492df2d85259e0d79ead4177f9ae724b6fb245198f55d6", size = 249695, upload-time = "2026-02-09T12:56:46.636Z" }, + { url = "https://files.pythonhosted.org/packages/b5/2c/b7476f938deb07166f3eb281a385c262675d688ff4659ad56c6c6b8e2e70/coverage-7.13.4-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:b507778ae8a4c915436ed5c2e05b4a6cecfa70f734e19c22a005152a11c7b6a9", size = 250599, upload-time = "2026-02-09T12:56:48.13Z" }, + { url = "https://files.pythonhosted.org/packages/b8/34/c3420709d9846ee3785b9f2831b4d94f276f38884032dca1457fa83f7476/coverage-7.13.4-cp311-cp311-win32.whl", hash = "sha256:784fc3cf8be001197b652d51d3fd259b1e2262888693a4636e18879f613a62a9", size = 221780, upload-time = "2026-02-09T12:56:50.479Z" }, + { url = "https://files.pythonhosted.org/packages/61/08/3d9c8613079d2b11c185b865de9a4c1a68850cfda2b357fae365cf609f29/coverage-7.13.4-cp311-cp311-win_amd64.whl", hash = "sha256:2421d591f8ca05b308cf0092807308b2facbefe54af7c02ac22548b88b95c98f", size = 222715, upload-time = "2026-02-09T12:56:51.815Z" }, + { url = 
"https://files.pythonhosted.org/packages/18/1a/54c3c80b2f056164cc0a6cdcb040733760c7c4be9d780fe655f356f433e4/coverage-7.13.4-cp311-cp311-win_arm64.whl", hash = "sha256:79e73a76b854d9c6088fe5d8b2ebe745f8681c55f7397c3c0a016192d681045f", size = 221385, upload-time = "2026-02-09T12:56:53.194Z" }, + { url = "https://files.pythonhosted.org/packages/d1/81/4ce2fdd909c5a0ed1f6dedb88aa57ab79b6d1fbd9b588c1ac7ef45659566/coverage-7.13.4-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:02231499b08dabbe2b96612993e5fc34217cdae907a51b906ac7fca8027a4459", size = 219449, upload-time = "2026-02-09T12:56:54.889Z" }, + { url = "https://files.pythonhosted.org/packages/5d/96/5238b1efc5922ddbdc9b0db9243152c09777804fb7c02ad1741eb18a11c0/coverage-7.13.4-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:40aa8808140e55dc022b15d8aa7f651b6b3d68b365ea0398f1441e0b04d859c3", size = 219810, upload-time = "2026-02-09T12:56:56.33Z" }, + { url = "https://files.pythonhosted.org/packages/78/72/2f372b726d433c9c35e56377cf1d513b4c16fe51841060d826b95caacec1/coverage-7.13.4-cp312-cp312-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:5b856a8ccf749480024ff3bd7310adaef57bf31fd17e1bfc404b7940b6986634", size = 251308, upload-time = "2026-02-09T12:56:57.858Z" }, + { url = "https://files.pythonhosted.org/packages/5d/a0/2ea570925524ef4e00bb6c82649f5682a77fac5ab910a65c9284de422600/coverage-7.13.4-cp312-cp312-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:2c048ea43875fbf8b45d476ad79f179809c590ec7b79e2035c662e7afa3192e3", size = 254052, upload-time = "2026-02-09T12:56:59.754Z" }, + { url = "https://files.pythonhosted.org/packages/e8/ac/45dc2e19a1939098d783c846e130b8f862fbb50d09e0af663988f2f21973/coverage-7.13.4-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:b7b38448866e83176e28086674fe7368ab8590e4610fb662b44e345b86d63ffa", size = 255165, upload-time = "2026-02-09T12:57:01.287Z" }, + { url = 
"https://files.pythonhosted.org/packages/2d/4d/26d236ff35abc3b5e63540d3386e4c3b192168c1d96da5cb2f43c640970f/coverage-7.13.4-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:de6defc1c9badbf8b9e67ae90fd00519186d6ab64e5cc5f3d21359c2a9b2c1d3", size = 257432, upload-time = "2026-02-09T12:57:02.637Z" }, + { url = "https://files.pythonhosted.org/packages/ec/55/14a966c757d1348b2e19caf699415a2a4c4f7feaa4bbc6326a51f5c7dd1b/coverage-7.13.4-cp312-cp312-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:7eda778067ad7ffccd23ecffce537dface96212576a07924cbf0d8799d2ded5a", size = 251716, upload-time = "2026-02-09T12:57:04.056Z" }, + { url = "https://files.pythonhosted.org/packages/77/33/50116647905837c66d28b2af1321b845d5f5d19be9655cb84d4a0ea806b4/coverage-7.13.4-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:e87f6c587c3f34356c3759f0420693e35e7eb0e2e41e4c011cb6ec6ecbbf1db7", size = 253089, upload-time = "2026-02-09T12:57:05.503Z" }, + { url = "https://files.pythonhosted.org/packages/c2/b4/8efb11a46e3665d92635a56e4f2d4529de6d33f2cb38afd47d779d15fc99/coverage-7.13.4-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:8248977c2e33aecb2ced42fef99f2d319e9904a36e55a8a68b69207fb7e43edc", size = 251232, upload-time = "2026-02-09T12:57:06.879Z" }, + { url = "https://files.pythonhosted.org/packages/51/24/8cd73dd399b812cc76bb0ac260e671c4163093441847ffe058ac9fda1e32/coverage-7.13.4-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:25381386e80ae727608e662474db537d4df1ecd42379b5ba33c84633a2b36d47", size = 255299, upload-time = "2026-02-09T12:57:08.245Z" }, + { url = "https://files.pythonhosted.org/packages/03/94/0a4b12f1d0e029ce1ccc1c800944a9984cbe7d678e470bb6d3c6bc38a0da/coverage-7.13.4-cp312-cp312-musllinux_1_2_riscv64.whl", hash = "sha256:ee756f00726693e5ba94d6df2bdfd64d4852d23b09bb0bc700e3b30e6f333985", size = 250796, upload-time = "2026-02-09T12:57:10.142Z" }, + { url = 
"https://files.pythonhosted.org/packages/73/44/6002fbf88f6698ca034360ce474c406be6d5a985b3fdb3401128031eef6b/coverage-7.13.4-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:fdfc1e28e7c7cdce44985b3043bc13bbd9c747520f94a4d7164af8260b3d91f0", size = 252673, upload-time = "2026-02-09T12:57:12.197Z" }, + { url = "https://files.pythonhosted.org/packages/de/c6/a0279f7c00e786be75a749a5674e6fa267bcbd8209cd10c9a450c655dfa7/coverage-7.13.4-cp312-cp312-win32.whl", hash = "sha256:01d4cbc3c283a17fc1e42d614a119f7f438eabb593391283adca8dc86eff1246", size = 221990, upload-time = "2026-02-09T12:57:14.085Z" }, + { url = "https://files.pythonhosted.org/packages/77/4e/c0a25a425fcf5557d9abd18419c95b63922e897bc86c1f327f155ef234a9/coverage-7.13.4-cp312-cp312-win_amd64.whl", hash = "sha256:9401ebc7ef522f01d01d45532c68c5ac40fb27113019b6b7d8b208f6e9baa126", size = 222800, upload-time = "2026-02-09T12:57:15.944Z" }, + { url = "https://files.pythonhosted.org/packages/47/ac/92da44ad9a6f4e3a7debd178949d6f3769bedca33830ce9b1dcdab589a37/coverage-7.13.4-cp312-cp312-win_arm64.whl", hash = "sha256:b1ec7b6b6e93255f952e27ab58fbc68dcc468844b16ecbee881aeb29b6ab4d8d", size = 221415, upload-time = "2026-02-09T12:57:17.497Z" }, + { url = "https://files.pythonhosted.org/packages/db/23/aad45061a31677d68e47499197a131eea55da4875d16c1f42021ab963503/coverage-7.13.4-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:b66a2da594b6068b48b2692f043f35d4d3693fb639d5ea8b39533c2ad9ac3ab9", size = 219474, upload-time = "2026-02-09T12:57:19.332Z" }, + { url = "https://files.pythonhosted.org/packages/a5/70/9b8b67a0945f3dfec1fd896c5cefb7c19d5a3a6d74630b99a895170999ae/coverage-7.13.4-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:3599eb3992d814d23b35c536c28df1a882caa950f8f507cef23d1cbf334995ac", size = 219844, upload-time = "2026-02-09T12:57:20.66Z" }, + { url = 
"https://files.pythonhosted.org/packages/97/fd/7e859f8fab324cef6c4ad7cff156ca7c489fef9179d5749b0c8d321281c2/coverage-7.13.4-cp313-cp313-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:93550784d9281e374fb5a12bf1324cc8a963fd63b2d2f223503ef0fd4aa339ea", size = 250832, upload-time = "2026-02-09T12:57:22.007Z" }, + { url = "https://files.pythonhosted.org/packages/e4/dc/b2442d10020c2f52617828862d8b6ee337859cd8f3a1f13d607dddda9cf7/coverage-7.13.4-cp313-cp313-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:b720ce6a88a2755f7c697c23268ddc47a571b88052e6b155224347389fdf6a3b", size = 253434, upload-time = "2026-02-09T12:57:23.339Z" }, + { url = "https://files.pythonhosted.org/packages/5a/88/6728a7ad17428b18d836540630487231f5470fb82454871149502f5e5aa2/coverage-7.13.4-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:7b322db1284a2ed3aa28ffd8ebe3db91c929b7a333c0820abec3d838ef5b3525", size = 254676, upload-time = "2026-02-09T12:57:24.774Z" }, + { url = "https://files.pythonhosted.org/packages/7c/bc/21244b1b8cedf0dff0a2b53b208015fe798d5f2a8d5348dbfece04224fff/coverage-7.13.4-cp313-cp313-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:f4594c67d8a7c89cf922d9df0438c7c7bb022ad506eddb0fdb2863359ff78242", size = 256807, upload-time = "2026-02-09T12:57:26.125Z" }, + { url = "https://files.pythonhosted.org/packages/97/a0/ddba7ed3251cff51006737a727d84e05b61517d1784a9988a846ba508877/coverage-7.13.4-cp313-cp313-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:53d133df809c743eb8bce33b24bcababb371f4441340578cd406e084d94a6148", size = 251058, upload-time = "2026-02-09T12:57:27.614Z" }, + { url = "https://files.pythonhosted.org/packages/9b/55/e289addf7ff54d3a540526f33751951bf0878f3809b47f6dfb3def69c6f7/coverage-7.13.4-cp313-cp313-musllinux_1_2_aarch64.whl", hash = 
"sha256:76451d1978b95ba6507a039090ba076105c87cc76fc3efd5d35d72093964d49a", size = 252805, upload-time = "2026-02-09T12:57:29.066Z" }, + { url = "https://files.pythonhosted.org/packages/13/4e/cc276b1fa4a59be56d96f1dabddbdc30f4ba22e3b1cd42504c37b3313255/coverage-7.13.4-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:7f57b33491e281e962021de110b451ab8a24182589be17e12a22c79047935e23", size = 250766, upload-time = "2026-02-09T12:57:30.522Z" }, + { url = "https://files.pythonhosted.org/packages/94/44/1093b8f93018f8b41a8cf29636c9292502f05e4a113d4d107d14a3acd044/coverage-7.13.4-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:1731dc33dc276dafc410a885cbf5992f1ff171393e48a21453b78727d090de80", size = 254923, upload-time = "2026-02-09T12:57:31.946Z" }, + { url = "https://files.pythonhosted.org/packages/8b/55/ea2796da2d42257f37dbea1aab239ba9263b31bd91d5527cdd6db5efe174/coverage-7.13.4-cp313-cp313-musllinux_1_2_riscv64.whl", hash = "sha256:bd60d4fe2f6fa7dff9223ca1bbc9f05d2b6697bc5961072e5d3b952d46e1b1ea", size = 250591, upload-time = "2026-02-09T12:57:33.842Z" }, + { url = "https://files.pythonhosted.org/packages/d4/fa/7c4bb72aacf8af5020675aa633e59c1fbe296d22aed191b6a5b711eb2bc7/coverage-7.13.4-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:9181a3ccead280b828fae232df12b16652702b49d41e99d657f46cc7b1f6ec7a", size = 252364, upload-time = "2026-02-09T12:57:35.743Z" }, + { url = "https://files.pythonhosted.org/packages/5c/38/a8d2ec0146479c20bbaa7181b5b455a0c41101eed57f10dd19a78ab44c80/coverage-7.13.4-cp313-cp313-win32.whl", hash = "sha256:f53d492307962561ac7de4cd1de3e363589b000ab69617c6156a16ba7237998d", size = 222010, upload-time = "2026-02-09T12:57:37.25Z" }, + { url = "https://files.pythonhosted.org/packages/e2/0c/dbfafbe90a185943dcfbc766fe0e1909f658811492d79b741523a414a6cc/coverage-7.13.4-cp313-cp313-win_amd64.whl", hash = "sha256:e6f70dec1cc557e52df5306d051ef56003f74d56e9c4dd7ddb07e07ef32a84dd", size = 222818, upload-time = "2026-02-09T12:57:38.734Z" }, + { url 
= "https://files.pythonhosted.org/packages/04/d1/934918a138c932c90d78301f45f677fb05c39a3112b96fd2c8e60503cdc7/coverage-7.13.4-cp313-cp313-win_arm64.whl", hash = "sha256:fb07dc5da7e849e2ad31a5d74e9bece81f30ecf5a42909d0a695f8bd1874d6af", size = 221438, upload-time = "2026-02-09T12:57:40.223Z" }, + { url = "https://files.pythonhosted.org/packages/52/57/ee93ced533bcb3e6df961c0c6e42da2fc6addae53fb95b94a89b1e33ebd7/coverage-7.13.4-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:40d74da8e6c4b9ac18b15331c4b5ebc35a17069410cad462ad4f40dcd2d50c0d", size = 220165, upload-time = "2026-02-09T12:57:41.639Z" }, + { url = "https://files.pythonhosted.org/packages/c5/e0/969fc285a6fbdda49d91af278488d904dcd7651b2693872f0ff94e40e84a/coverage-7.13.4-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:4223b4230a376138939a9173f1bdd6521994f2aff8047fae100d6d94d50c5a12", size = 220516, upload-time = "2026-02-09T12:57:44.215Z" }, + { url = "https://files.pythonhosted.org/packages/b1/b8/9531944e16267e2735a30a9641ff49671f07e8138ecf1ca13db9fd2560c7/coverage-7.13.4-cp313-cp313t-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:1d4be36a5114c499f9f1f9195e95ebf979460dbe2d88e6816ea202010ba1c34b", size = 261804, upload-time = "2026-02-09T12:57:45.989Z" }, + { url = "https://files.pythonhosted.org/packages/8a/f3/e63df6d500314a2a60390d1989240d5f27318a7a68fa30ad3806e2a9323e/coverage-7.13.4-cp313-cp313t-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:200dea7d1e8095cc6e98cdabe3fd1d21ab17d3cee6dab00cadbb2fe35d9c15b9", size = 263885, upload-time = "2026-02-09T12:57:47.42Z" }, + { url = "https://files.pythonhosted.org/packages/f3/67/7654810de580e14b37670b60a09c599fa348e48312db5b216d730857ffe6/coverage-7.13.4-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:b8eb931ee8e6d8243e253e5ed7336deea6904369d2fd8ae6e43f68abbf167092", size = 266308, upload-time = "2026-02-09T12:57:49.345Z" }, + { url = 
"https://files.pythonhosted.org/packages/37/6f/39d41eca0eab3cc82115953ad41c4e77935286c930e8fad15eaed1389d83/coverage-7.13.4-cp313-cp313t-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:75eab1ebe4f2f64d9509b984f9314d4aa788540368218b858dad56dc8f3e5eb9", size = 267452, upload-time = "2026-02-09T12:57:50.811Z" }, + { url = "https://files.pythonhosted.org/packages/50/6d/39c0fbb8fc5cd4d2090811e553c2108cf5112e882f82505ee7495349a6bf/coverage-7.13.4-cp313-cp313t-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:c35eb28c1d085eb7d8c9b3296567a1bebe03ce72962e932431b9a61f28facf26", size = 261057, upload-time = "2026-02-09T12:57:52.447Z" }, + { url = "https://files.pythonhosted.org/packages/a4/a2/60010c669df5fa603bb5a97fb75407e191a846510da70ac657eb696b7fce/coverage-7.13.4-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:eb88b316ec33760714a4720feb2816a3a59180fd58c1985012054fa7aebee4c2", size = 263875, upload-time = "2026-02-09T12:57:53.938Z" }, + { url = "https://files.pythonhosted.org/packages/3e/d9/63b22a6bdbd17f1f96e9ed58604c2a6b0e72a9133e37d663bef185877cf6/coverage-7.13.4-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:7d41eead3cc673cbd38a4417deb7fd0b4ca26954ff7dc6078e33f6ff97bed940", size = 261500, upload-time = "2026-02-09T12:57:56.012Z" }, + { url = "https://files.pythonhosted.org/packages/70/bf/69f86ba1ad85bc3ad240e4c0e57a2e620fbc0e1645a47b5c62f0e941ad7f/coverage-7.13.4-cp313-cp313t-musllinux_1_2_ppc64le.whl", hash = "sha256:fb26a934946a6afe0e326aebe0730cdff393a8bc0bbb65a2f41e30feddca399c", size = 265212, upload-time = "2026-02-09T12:57:57.5Z" }, + { url = "https://files.pythonhosted.org/packages/ae/f2/5f65a278a8c2148731831574c73e42f57204243d33bedaaf18fa79c5958f/coverage-7.13.4-cp313-cp313t-musllinux_1_2_riscv64.whl", hash = "sha256:dae88bc0fc77edaa65c14be099bd57ee140cf507e6bfdeea7938457ab387efb0", size = 260398, upload-time = "2026-02-09T12:57:59.027Z" }, + { url = 
"https://files.pythonhosted.org/packages/ef/80/6e8280a350ee9fea92f14b8357448a242dcaa243cb2c72ab0ca591f66c8c/coverage-7.13.4-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:845f352911777a8e722bfce168958214951e07e47e5d5d9744109fa5fe77f79b", size = 262584, upload-time = "2026-02-09T12:58:01.129Z" }, + { url = "https://files.pythonhosted.org/packages/22/63/01ff182fc95f260b539590fb12c11ad3e21332c15f9799cb5e2386f71d9f/coverage-7.13.4-cp313-cp313t-win32.whl", hash = "sha256:2fa8d5f8de70688a28240de9e139fa16b153cc3cbb01c5f16d88d6505ebdadf9", size = 222688, upload-time = "2026-02-09T12:58:02.736Z" }, + { url = "https://files.pythonhosted.org/packages/a9/43/89de4ef5d3cd53b886afa114065f7e9d3707bdb3e5efae13535b46ae483d/coverage-7.13.4-cp313-cp313t-win_amd64.whl", hash = "sha256:9351229c8c8407645840edcc277f4a2d44814d1bc34a2128c11c2a031d45a5dd", size = 223746, upload-time = "2026-02-09T12:58:05.362Z" }, + { url = "https://files.pythonhosted.org/packages/35/39/7cf0aa9a10d470a5309b38b289b9bb07ddeac5d61af9b664fe9775a4cb3e/coverage-7.13.4-cp313-cp313t-win_arm64.whl", hash = "sha256:30b8d0512f2dc8c8747557e8fb459d6176a2c9e5731e2b74d311c03b78451997", size = 222003, upload-time = "2026-02-09T12:58:06.952Z" }, + { url = "https://files.pythonhosted.org/packages/92/11/a9cf762bb83386467737d32187756a42094927150c3e107df4cb078e8590/coverage-7.13.4-cp314-cp314-macosx_10_15_x86_64.whl", hash = "sha256:300deaee342f90696ed186e3a00c71b5b3d27bffe9e827677954f4ee56969601", size = 219522, upload-time = "2026-02-09T12:58:08.623Z" }, + { url = "https://files.pythonhosted.org/packages/d3/28/56e6d892b7b052236d67c95f1936b6a7cf7c3e2634bf27610b8cbd7f9c60/coverage-7.13.4-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:29e3220258d682b6226a9b0925bc563ed9a1ebcff3cad30f043eceea7eaf2689", size = 219855, upload-time = "2026-02-09T12:58:10.176Z" }, + { url = 
"https://files.pythonhosted.org/packages/e5/69/233459ee9eb0c0d10fcc2fe425a029b3fa5ce0f040c966ebce851d030c70/coverage-7.13.4-cp314-cp314-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:391ee8f19bef69210978363ca930f7328081c6a0152f1166c91f0b5fdd2a773c", size = 250887, upload-time = "2026-02-09T12:58:12.503Z" }, + { url = "https://files.pythonhosted.org/packages/06/90/2cdab0974b9b5bbc1623f7876b73603aecac11b8d95b85b5b86b32de5eab/coverage-7.13.4-cp314-cp314-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:0dd7ab8278f0d58a0128ba2fca25824321f05d059c1441800e934ff2efa52129", size = 253396, upload-time = "2026-02-09T12:58:14.615Z" }, + { url = "https://files.pythonhosted.org/packages/ac/15/ea4da0f85bf7d7b27635039e649e99deb8173fe551096ea15017f7053537/coverage-7.13.4-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:78cdf0d578b15148b009ccf18c686aa4f719d887e76e6b40c38ffb61d264a552", size = 254745, upload-time = "2026-02-09T12:58:16.162Z" }, + { url = "https://files.pythonhosted.org/packages/99/11/bb356e86920c655ca4d61daee4e2bbc7258f0a37de0be32d233b561134ff/coverage-7.13.4-cp314-cp314-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:48685fee12c2eb3b27c62f2658e7ea21e9c3239cba5a8a242801a0a3f6a8c62a", size = 257055, upload-time = "2026-02-09T12:58:17.892Z" }, + { url = "https://files.pythonhosted.org/packages/c9/0f/9ae1f8cb17029e09da06ca4e28c9e1d5c1c0a511c7074592e37e0836c915/coverage-7.13.4-cp314-cp314-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:4e83efc079eb39480e6346a15a1bcb3e9b04759c5202d157e1dd4303cd619356", size = 250911, upload-time = "2026-02-09T12:58:19.495Z" }, + { url = "https://files.pythonhosted.org/packages/89/3a/adfb68558fa815cbc29747b553bc833d2150228f251b127f1ce97e48547c/coverage-7.13.4-cp314-cp314-musllinux_1_2_aarch64.whl", hash = 
"sha256:ecae9737b72408d6a950f7e525f30aca12d4bd8dd95e37342e5beb3a2a8c4f71", size = 252754, upload-time = "2026-02-09T12:58:21.064Z" }, + { url = "https://files.pythonhosted.org/packages/32/b1/540d0c27c4e748bd3cd0bd001076ee416eda993c2bae47a73b7cc9357931/coverage-7.13.4-cp314-cp314-musllinux_1_2_i686.whl", hash = "sha256:ae4578f8528569d3cf303fef2ea569c7f4c4059a38c8667ccef15c6e1f118aa5", size = 250720, upload-time = "2026-02-09T12:58:22.622Z" }, + { url = "https://files.pythonhosted.org/packages/c7/95/383609462b3ffb1fe133014a7c84fc0dd01ed55ac6140fa1093b5af7ebb1/coverage-7.13.4-cp314-cp314-musllinux_1_2_ppc64le.whl", hash = "sha256:6fdef321fdfbb30a197efa02d48fcd9981f0d8ad2ae8903ac318adc653f5df98", size = 254994, upload-time = "2026-02-09T12:58:24.548Z" }, + { url = "https://files.pythonhosted.org/packages/f7/ba/1761138e86c81680bfc3c49579d66312865457f9fe405b033184e5793cb3/coverage-7.13.4-cp314-cp314-musllinux_1_2_riscv64.whl", hash = "sha256:2b0f6ccf3dbe577170bebfce1318707d0e8c3650003cb4b3a9dd744575daa8b5", size = 250531, upload-time = "2026-02-09T12:58:26.271Z" }, + { url = "https://files.pythonhosted.org/packages/f8/8e/05900df797a9c11837ab59c4d6fe94094e029582aab75c3309a93e6fb4e3/coverage-7.13.4-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:75fcd519f2a5765db3f0e391eb3b7d150cce1a771bf4c9f861aeab86c767a3c0", size = 252189, upload-time = "2026-02-09T12:58:27.807Z" }, + { url = "https://files.pythonhosted.org/packages/00/bd/29c9f2db9ea4ed2738b8a9508c35626eb205d51af4ab7bf56a21a2e49926/coverage-7.13.4-cp314-cp314-win32.whl", hash = "sha256:8e798c266c378da2bd819b0677df41ab46d78065fb2a399558f3f6cae78b2fbb", size = 222258, upload-time = "2026-02-09T12:58:29.441Z" }, + { url = "https://files.pythonhosted.org/packages/a7/4d/1f8e723f6829977410efeb88f73673d794075091c8c7c18848d273dc9d73/coverage-7.13.4-cp314-cp314-win_amd64.whl", hash = "sha256:245e37f664d89861cf2329c9afa2c1fe9e6d4e1a09d872c947e70718aeeac505", size = 223073, upload-time = "2026-02-09T12:58:31.026Z" }, + { url 
= "https://files.pythonhosted.org/packages/51/5b/84100025be913b44e082ea32abcf1afbf4e872f5120b7a1cab1d331b1e13/coverage-7.13.4-cp314-cp314-win_arm64.whl", hash = "sha256:ad27098a189e5838900ce4c2a99f2fe42a0bf0c2093c17c69b45a71579e8d4a2", size = 221638, upload-time = "2026-02-09T12:58:32.599Z" }, + { url = "https://files.pythonhosted.org/packages/a7/e4/c884a405d6ead1370433dad1e3720216b4f9fd8ef5b64bfd984a2a60a11a/coverage-7.13.4-cp314-cp314t-macosx_10_15_x86_64.whl", hash = "sha256:85480adfb35ffc32d40918aad81b89c69c9cc5661a9b8a81476d3e645321a056", size = 220246, upload-time = "2026-02-09T12:58:34.181Z" }, + { url = "https://files.pythonhosted.org/packages/81/5c/4d7ed8b23b233b0fffbc9dfec53c232be2e695468523242ea9fd30f97ad2/coverage-7.13.4-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:79be69cf7f3bf9b0deeeb062eab7ac7f36cd4cc4c4dd694bd28921ba4d8596cc", size = 220514, upload-time = "2026-02-09T12:58:35.704Z" }, + { url = "https://files.pythonhosted.org/packages/2f/6f/3284d4203fd2f28edd73034968398cd2d4cb04ab192abc8cff007ea35679/coverage-7.13.4-cp314-cp314t-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:caa421e2684e382c5d8973ac55e4f36bed6821a9bad5c953494de960c74595c9", size = 261877, upload-time = "2026-02-09T12:58:37.864Z" }, + { url = "https://files.pythonhosted.org/packages/09/aa/b672a647bbe1556a85337dc95bfd40d146e9965ead9cc2fe81bde1e5cbce/coverage-7.13.4-cp314-cp314t-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:14375934243ee05f56c45393fe2ce81fe5cc503c07cee2bdf1725fb8bef3ffaf", size = 264004, upload-time = "2026-02-09T12:58:39.492Z" }, + { url = "https://files.pythonhosted.org/packages/79/a1/aa384dbe9181f98bba87dd23dda436f0c6cf2e148aecbb4e50fc51c1a656/coverage-7.13.4-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:25a41c3104d08edb094d9db0d905ca54d0cd41c928bb6be3c4c799a54753af55", size = 266408, upload-time = "2026-02-09T12:58:41.852Z" }, + { url = 
"https://files.pythonhosted.org/packages/53/5e/5150bf17b4019bc600799f376bb9606941e55bd5a775dc1e096b6ffea952/coverage-7.13.4-cp314-cp314t-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:6f01afcff62bf9a08fb32b2c1d6e924236c0383c02c790732b6537269e466a72", size = 267544, upload-time = "2026-02-09T12:58:44.093Z" }, + { url = "https://files.pythonhosted.org/packages/e0/ed/f1de5c675987a4a7a672250d2c5c9d73d289dbf13410f00ed7181d8017dd/coverage-7.13.4-cp314-cp314t-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:eb9078108fbf0bcdde37c3f4779303673c2fa1fe8f7956e68d447d0dd426d38a", size = 260980, upload-time = "2026-02-09T12:58:45.721Z" }, + { url = "https://files.pythonhosted.org/packages/b3/e3/fe758d01850aa172419a6743fe76ba8b92c29d181d4f676ffe2dae2ba631/coverage-7.13.4-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:0e086334e8537ddd17e5f16a344777c1ab8194986ec533711cbe6c41cde841b6", size = 263871, upload-time = "2026-02-09T12:58:47.334Z" }, + { url = "https://files.pythonhosted.org/packages/b6/76/b829869d464115e22499541def9796b25312b8cf235d3bb00b39f1675395/coverage-7.13.4-cp314-cp314t-musllinux_1_2_i686.whl", hash = "sha256:725d985c5ab621268b2edb8e50dfe57633dc69bda071abc470fed55a14935fd3", size = 261472, upload-time = "2026-02-09T12:58:48.995Z" }, + { url = "https://files.pythonhosted.org/packages/14/9e/caedb1679e73e2f6ad240173f55218488bfe043e38da577c4ec977489915/coverage-7.13.4-cp314-cp314t-musllinux_1_2_ppc64le.whl", hash = "sha256:3c06f0f1337c667b971ca2f975523347e63ec5e500b9aa5882d91931cd3ef750", size = 265210, upload-time = "2026-02-09T12:58:51.178Z" }, + { url = "https://files.pythonhosted.org/packages/3a/10/0dd02cb009b16ede425b49ec344aba13a6ae1dc39600840ea6abcb085ac4/coverage-7.13.4-cp314-cp314t-musllinux_1_2_riscv64.whl", hash = "sha256:590c0ed4bf8e85f745e6b805b2e1c457b2e33d5255dd9729743165253bc9ad39", size = 260319, upload-time = "2026-02-09T12:58:53.081Z" }, + { url = 
"https://files.pythonhosted.org/packages/92/8e/234d2c927af27c6d7a5ffad5bd2cf31634c46a477b4c7adfbfa66baf7ebb/coverage-7.13.4-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:eb30bf180de3f632cd043322dad5751390e5385108b2807368997d1a92a509d0", size = 262638, upload-time = "2026-02-09T12:58:55.258Z" }, + { url = "https://files.pythonhosted.org/packages/2f/64/e5547c8ff6964e5965c35a480855911b61509cce544f4d442caa759a0702/coverage-7.13.4-cp314-cp314t-win32.whl", hash = "sha256:c4240e7eded42d131a2d2c4dec70374b781b043ddc79a9de4d55ca71f8e98aea", size = 223040, upload-time = "2026-02-09T12:58:56.936Z" }, + { url = "https://files.pythonhosted.org/packages/c7/96/38086d58a181aac86d503dfa9c47eb20715a79c3e3acbdf786e92e5c09a8/coverage-7.13.4-cp314-cp314t-win_amd64.whl", hash = "sha256:4c7d3cc01e7350f2f0f6f7036caaf5673fb56b6998889ccfe9e1c1fe75a9c932", size = 224148, upload-time = "2026-02-09T12:58:58.645Z" }, + { url = "https://files.pythonhosted.org/packages/ce/72/8d10abd3740a0beb98c305e0c3faf454366221c0f37a8bcf8f60020bb65a/coverage-7.13.4-cp314-cp314t-win_arm64.whl", hash = "sha256:23e3f687cf945070d1c90f85db66d11e3025665d8dafa831301a0e0038f3db9b", size = 222172, upload-time = "2026-02-09T12:59:00.396Z" }, + { url = "https://files.pythonhosted.org/packages/0d/4a/331fe2caf6799d591109bb9c08083080f6de90a823695d412a935622abb2/coverage-7.13.4-py3-none-any.whl", hash = "sha256:1af1641e57cf7ba1bd67d677c9abdbcd6cc2ab7da3bca7fa1e2b7e50e65f2ad0", size = 211242, upload-time = "2026-02-09T12:59:02.032Z" }, +] + +[package.optional-dependencies] +toml = [ + { name = "tomli", marker = "python_full_version <= '3.11'" }, +] + +[[package]] +name = "docker" +version = "7.1.0" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "pywin32", marker = "sys_platform == 'win32'" }, + { name = "requests" }, + { name = "urllib3" }, +] +sdist = { url = 
"https://files.pythonhosted.org/packages/91/9b/4a2ea29aeba62471211598dac5d96825bb49348fa07e906ea930394a83ce/docker-7.1.0.tar.gz", hash = "sha256:ad8c70e6e3f8926cb8a92619b832b4ea5299e2831c14284663184e200546fa6c", size = 117834, upload-time = "2024-05-23T11:13:57.216Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/e3/26/57c6fb270950d476074c087527a558ccb6f4436657314bfb6cdf484114c4/docker-7.1.0-py3-none-any.whl", hash = "sha256:c96b93b7f0a746f9e77d325bcfb87422a3d8bd4f03136ae8a85b37f1898d5fc0", size = 147774, upload-time = "2024-05-23T11:13:55.01Z" }, +] + +[[package]] +name = "durationpy" +version = "0.10" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/9d/a4/e44218c2b394e31a6dd0d6b095c4e1f32d0be54c2a4b250032d717647bab/durationpy-0.10.tar.gz", hash = "sha256:1fa6893409a6e739c9c72334fc65cca1f355dbdd93405d30f726deb5bde42fba", size = 3335, upload-time = "2025-05-17T13:52:37.26Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/b0/0d/9feae160378a3553fa9a339b0e9c1a048e147a4127210e286ef18b730f03/durationpy-0.10-py3-none-any.whl", hash = "sha256:3b41e1b601234296b4fb368338fdcd3e13e0b4fb5b67345948f4f2bf9868b286", size = 3922, upload-time = "2025-05-17T13:52:36.463Z" }, +] + +[[package]] +name = "edx-codejail" +version = "4.1.0" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "six" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/94/4b/d94b6f4c3b8ac1ddb9a6badd4bd03d8337265ef207406dffc91f671db695/edx_codejail-4.1.0.tar.gz", hash = "sha256:fdccde57a2dc8c81ebf80c4f9d317cbf1ae2f68c7f598c2f17289b85b7c0ccdb", size = 29888, upload-time = "2025-11-07T15:30:16.156Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/8f/82/8077069f7161d70257abd40ff9eddcbf3c1e4fbe66e23c933bf9a3c6ccb0/edx_codejail-4.1.0-py3-none-any.whl", hash = "sha256:0b2779131136117929bb8e58c302062dc3e98ee6526f3eb6936b1738daa7a14c", size = 25463, upload-time = 
"2025-11-07T15:30:14.747Z" }, +] + +[[package]] +name = "googleapis-common-protos" +version = "1.73.0" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "protobuf" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/99/96/a0205167fa0154f4a542fd6925bdc63d039d88dab3588b875078107e6f06/googleapis_common_protos-1.73.0.tar.gz", hash = "sha256:778d07cd4fbeff84c6f7c72102f0daf98fa2bfd3fa8bea426edc545588da0b5a", size = 147323, upload-time = "2026-03-06T21:53:09.727Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/69/28/23eea8acd65972bbfe295ce3666b28ac510dfcb115fac089d3edb0feb00a/googleapis_common_protos-1.73.0-py3-none-any.whl", hash = "sha256:dfdaaa2e860f242046be561e6d6cb5c5f1541ae02cfbcb034371aadb2942b4e8", size = 297578, upload-time = "2026-03-06T21:52:33.933Z" }, +] + +[[package]] +name = "idna" +version = "3.11" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/6f/6d/0703ccc57f3a7233505399edb88de3cbd678da106337b9fcde432b65ed60/idna-3.11.tar.gz", hash = "sha256:795dafcc9c04ed0c1fb032c2aa73654d8e8c5023a7df64a53f39190ada629902", size = 194582, upload-time = "2025-10-12T14:55:20.501Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/0e/61/66938bbb5fc52dbdf84594873d5b51fb1f7c7794e9c0f5bd885f30bc507b/idna-3.11-py3-none-any.whl", hash = "sha256:771a87f49d9defaf64091e6e6fe9c18d4833f140bd19464795bc32d966ca37ea", size = 71008, upload-time = "2025-10-12T14:55:18.883Z" }, +] + +[[package]] +name = "importlib-metadata" +version = "8.7.1" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "zipp" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/f3/49/3b30cad09e7771a4982d9975a8cbf64f00d4a1ececb53297f1d9a7be1b10/importlib_metadata-8.7.1.tar.gz", hash = "sha256:49fef1ae6440c182052f407c8d34a68f72efc36db9ca90dc0113398f2fdde8bb", size = 57107, upload-time = "2025-12-21T10:00:19.278Z" } +wheels = [ + { url = 
"https://files.pythonhosted.org/packages/fa/5e/f8e9a1d23b9c20a551a8a02ea3637b4642e22c2626e3a13a9a29cdea99eb/importlib_metadata-8.7.1-py3-none-any.whl", hash = "sha256:5a1f80bf1daa489495071efbb095d75a634cf28a8bc299581244063b53176151", size = 27865, upload-time = "2025-12-21T10:00:18.329Z" }, +] + +[[package]] +name = "iniconfig" +version = "2.3.0" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/72/34/14ca021ce8e5dfedc35312d08ba8bf51fdd999c576889fc2c24cb97f4f10/iniconfig-2.3.0.tar.gz", hash = "sha256:c76315c77db068650d49c5b56314774a7804df16fee4402c1f19d6d15d8c4730", size = 20503, upload-time = "2025-10-18T21:55:43.219Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/cb/b1/3846dd7f199d53cb17f49cba7e651e9ce294d8497c8c150530ed11865bb8/iniconfig-2.3.0-py3-none-any.whl", hash = "sha256:f631c04d2c48c52b84d0d0549c99ff3859c98df65b3101406327ecc7d53fbf12", size = 7484, upload-time = "2025-10-18T21:55:41.639Z" }, +] + +[[package]] +name = "kubernetes" +version = "35.0.0" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "certifi" }, + { name = "durationpy" }, + { name = "python-dateutil" }, + { name = "pyyaml" }, + { name = "requests" }, + { name = "requests-oauthlib" }, + { name = "six" }, + { name = "urllib3" }, + { name = "websocket-client" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/2c/8f/85bf51ad4150f64e8c665daf0d9dfe9787ae92005efb9a4d1cba592bd79d/kubernetes-35.0.0.tar.gz", hash = "sha256:3d00d344944239821458b9efd484d6df9f011da367ecb155dadf9513f05f09ee", size = 1094642, upload-time = "2026-01-16T01:05:27.76Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/0c/70/05b685ea2dffcb2adbf3cdcea5d8865b7bc66f67249084cf845012a0ff13/kubernetes-35.0.0-py2.py3-none-any.whl", hash = "sha256:39e2b33b46e5834ef6c3985ebfe2047ab39135d41de51ce7641a7ca5b372a13d", size = 2017602, upload-time = "2026-01-16T01:05:25.991Z" }, +] + +[[package]] +name = 
"mock" +version = "5.2.0" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/07/8c/14c2ae915e5f9dca5a22edd68b35be94400719ccfa068a03e0fb63d0f6f6/mock-5.2.0.tar.gz", hash = "sha256:4e460e818629b4b173f32d08bf30d3af8123afbb8e04bb5707a1fd4799e503f0", size = 92796, upload-time = "2025-03-03T12:31:42.911Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/bd/d9/617e6af809bf3a1d468e0d58c3997b1dc219a9a9202e650d30c2fc85d481/mock-5.2.0-py3-none-any.whl", hash = "sha256:7ba87f72ca0e915175596069dbbcc7c75af7b5e9b9bc107ad6349ede0819982f", size = 31617, upload-time = "2025-03-03T12:31:41.518Z" }, +] + +[[package]] +name = "oauthlib" +version = "3.3.1" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/0b/5f/19930f824ffeb0ad4372da4812c50edbd1434f678c90c2733e1188edfc63/oauthlib-3.3.1.tar.gz", hash = "sha256:0f0f8aa759826a193cf66c12ea1af1637f87b9b4622d46e866952bb022e538c9", size = 185918, upload-time = "2025-06-19T22:48:08.269Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/be/9c/92789c596b8df838baa98fa71844d84283302f7604ed565dafe5a6b5041a/oauthlib-3.3.1-py3-none-any.whl", hash = "sha256:88119c938d2b8fb88561af5f6ee0eec8cc8d552b7bb1f712743136eb7523b7a1", size = 160065, upload-time = "2025-06-19T22:48:06.508Z" }, +] + +[[package]] +name = "opentelemetry-api" +version = "1.40.0" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "importlib-metadata" }, + { name = "typing-extensions" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/2c/1d/4049a9e8698361cc1a1aa03a6c59e4fa4c71e0c0f94a30f988a6876a2ae6/opentelemetry_api-1.40.0.tar.gz", hash = "sha256:159be641c0b04d11e9ecd576906462773eb97ae1b657730f0ecf64d32071569f", size = 70851, upload-time = "2026-03-04T14:17:21.555Z" } +wheels = [ + { url = 
"https://files.pythonhosted.org/packages/5f/bf/93795954016c522008da367da292adceed71cca6ee1717e1d64c83089099/opentelemetry_api-1.40.0-py3-none-any.whl", hash = "sha256:82dd69331ae74b06f6a874704be0cfaa49a1650e1537d4a813b86ecef7d0ecf9", size = 68676, upload-time = "2026-03-04T14:17:01.24Z" }, +] + +[[package]] +name = "opentelemetry-exporter-otlp-proto-common" +version = "1.40.0" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "opentelemetry-proto" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/51/bc/1559d46557fe6eca0b46c88d4c2676285f1f3be2e8d06bb5d15fbffc814a/opentelemetry_exporter_otlp_proto_common-1.40.0.tar.gz", hash = "sha256:1cbee86a4064790b362a86601ee7934f368b81cd4cc2f2e163902a6e7818a0fa", size = 20416, upload-time = "2026-03-04T14:17:23.801Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/8b/ca/8f122055c97a932311a3f640273f084e738008933503d0c2563cd5d591fc/opentelemetry_exporter_otlp_proto_common-1.40.0-py3-none-any.whl", hash = "sha256:7081ff453835a82417bf38dccf122c827c3cbc94f2079b03bba02a3165f25149", size = 18369, upload-time = "2026-03-04T14:17:04.796Z" }, +] + +[[package]] +name = "opentelemetry-exporter-otlp-proto-http" +version = "1.40.0" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "googleapis-common-protos" }, + { name = "opentelemetry-api" }, + { name = "opentelemetry-exporter-otlp-proto-common" }, + { name = "opentelemetry-proto" }, + { name = "opentelemetry-sdk" }, + { name = "requests" }, + { name = "typing-extensions" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/2e/fa/73d50e2c15c56be4d000c98e24221d494674b0cc95524e2a8cb3856d95a4/opentelemetry_exporter_otlp_proto_http-1.40.0.tar.gz", hash = "sha256:db48f5e0f33217588bbc00274a31517ba830da576e59503507c839b38fa0869c", size = 17772, upload-time = "2026-03-04T14:17:25.324Z" } +wheels = [ + { url = 
"https://files.pythonhosted.org/packages/a0/3a/8865d6754e61c9fb170cdd530a124a53769ee5f740236064816eb0ca7301/opentelemetry_exporter_otlp_proto_http-1.40.0-py3-none-any.whl", hash = "sha256:a8d1dab28f504c5d96577d6509f80a8150e44e8f45f82cdbe0e34c99ab040069", size = 19960, upload-time = "2026-03-04T14:17:07.153Z" }, +] + +[[package]] +name = "opentelemetry-proto" +version = "1.40.0" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "protobuf" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/4c/77/dd38991db037fdfce45849491cb61de5ab000f49824a00230afb112a4392/opentelemetry_proto-1.40.0.tar.gz", hash = "sha256:03f639ca129ba513f5819810f5b1f42bcb371391405d99c168fe6937c62febcd", size = 45667, upload-time = "2026-03-04T14:17:31.194Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/b9/b2/189b2577dde745b15625b3214302605b1353436219d42b7912e77fa8dc24/opentelemetry_proto-1.40.0-py3-none-any.whl", hash = "sha256:266c4385d88923a23d63e353e9761af0f47a6ed0d486979777fe4de59dc9b25f", size = 72073, upload-time = "2026-03-04T14:17:16.673Z" }, +] + +[[package]] +name = "opentelemetry-sdk" +version = "1.40.0" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "opentelemetry-api" }, + { name = "opentelemetry-semantic-conventions" }, + { name = "typing-extensions" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/58/fd/3c3125b20ba18ce2155ba9ea74acb0ae5d25f8cd39cfd37455601b7955cc/opentelemetry_sdk-1.40.0.tar.gz", hash = "sha256:18e9f5ec20d859d268c7cb3c5198c8d105d073714db3de50b593b8c1345a48f2", size = 184252, upload-time = "2026-03-04T14:17:31.87Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/2c/c5/6a852903d8bfac758c6dc6e9a68b015d3c33f2f1be5e9591e0f4b69c7e0a/opentelemetry_sdk-1.40.0-py3-none-any.whl", hash = "sha256:787d2154a71f4b3d81f20524a8ce061b7db667d24e46753f32a7bc48f1c1f3f1", size = 141951, upload-time = "2026-03-04T14:17:17.961Z" }, +] + +[[package]] +name = 
"opentelemetry-semantic-conventions" +version = "0.61b0" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "opentelemetry-api" }, + { name = "typing-extensions" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/6d/c0/4ae7973f3c2cfd2b6e321f1675626f0dab0a97027cc7a297474c9c8f3d04/opentelemetry_semantic_conventions-0.61b0.tar.gz", hash = "sha256:072f65473c5d7c6dc0355b27d6c9d1a679d63b6d4b4b16a9773062cb7e31192a", size = 145755, upload-time = "2026-03-04T14:17:32.664Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/b2/37/cc6a55e448deaa9b27377d087da8615a3416d8ad523d5960b78dbeadd02a/opentelemetry_semantic_conventions-0.61b0-py3-none-any.whl", hash = "sha256:fa530a96be229795f8cef353739b618148b0fe2b4b3f005e60e262926c4d38e2", size = 231621, upload-time = "2026-03-04T14:17:19.33Z" }, +] + +[[package]] +name = "packaging" +version = "26.0" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/65/ee/299d360cdc32edc7d2cf530f3accf79c4fca01e96ffc950d8a52213bd8e4/packaging-26.0.tar.gz", hash = "sha256:00243ae351a257117b6a241061796684b084ed1c516a08c48a3f7e147a9d80b4", size = 143416, upload-time = "2026-01-21T20:50:39.064Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/b7/b9/c538f279a4e237a006a2c98387d081e9eb060d203d8ed34467cc0f0b9b53/packaging-26.0-py3-none-any.whl", hash = "sha256:b36f1fef9334a5588b4166f8bcd26a14e521f2b55e6b9de3aaa80d3ff7a37529", size = 74366, upload-time = "2026-01-21T20:50:37.788Z" }, +] + +[[package]] +name = "pluggy" +version = "1.6.0" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/f9/e2/3e91f31a7d2b083fe6ef3fa267035b518369d9511ffab804f839851d2779/pluggy-1.6.0.tar.gz", hash = "sha256:7dcc130b76258d33b90f61b658791dede3486c3e6bfb003ee5c9bfb396dd22f3", size = 69412, upload-time = "2025-05-15T12:30:07.975Z" } +wheels = [ + { url = 
"https://files.pythonhosted.org/packages/54/20/4d324d65cc6d9205fabedc306948156824eb9f0ee1633355a8f7ec5c66bf/pluggy-1.6.0-py3-none-any.whl", hash = "sha256:e920276dd6813095e9377c0bc5566d94c932c33b27a3e3945d8389c374dd4746", size = 20538, upload-time = "2025-05-15T12:30:06.134Z" }, +] + +[[package]] +name = "protobuf" +version = "6.33.5" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/ba/25/7c72c307aafc96fa87062aa6291d9f7c94836e43214d43722e86037aac02/protobuf-6.33.5.tar.gz", hash = "sha256:6ddcac2a081f8b7b9642c09406bc6a4290128fce5f471cddd165960bb9119e5c", size = 444465, upload-time = "2026-01-29T21:51:33.494Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/b1/79/af92d0a8369732b027e6d6084251dd8e782c685c72da161bd4a2e00fbabb/protobuf-6.33.5-cp310-abi3-win32.whl", hash = "sha256:d71b040839446bac0f4d162e758bea99c8251161dae9d0983a3b88dee345153b", size = 425769, upload-time = "2026-01-29T21:51:21.751Z" }, + { url = "https://files.pythonhosted.org/packages/55/75/bb9bc917d10e9ee13dee8607eb9ab963b7cf8be607c46e7862c748aa2af7/protobuf-6.33.5-cp310-abi3-win_amd64.whl", hash = "sha256:3093804752167bcab3998bec9f1048baae6e29505adaf1afd14a37bddede533c", size = 437118, upload-time = "2026-01-29T21:51:24.022Z" }, + { url = "https://files.pythonhosted.org/packages/a2/6b/e48dfc1191bc5b52950246275bf4089773e91cb5ba3592621723cdddca62/protobuf-6.33.5-cp39-abi3-macosx_10_9_universal2.whl", hash = "sha256:a5cb85982d95d906df1e2210e58f8e4f1e3cdc088e52c921a041f9c9a0386de5", size = 427766, upload-time = "2026-01-29T21:51:25.413Z" }, + { url = "https://files.pythonhosted.org/packages/4e/b1/c79468184310de09d75095ed1314b839eb2f72df71097db9d1404a1b2717/protobuf-6.33.5-cp39-abi3-manylinux2014_aarch64.whl", hash = "sha256:9b71e0281f36f179d00cbcb119cb19dec4d14a81393e5ea220f64b286173e190", size = 324638, upload-time = "2026-01-29T21:51:26.423Z" }, + { url = 
"https://files.pythonhosted.org/packages/c5/f5/65d838092fd01c44d16037953fd4c2cc851e783de9b8f02b27ec4ffd906f/protobuf-6.33.5-cp39-abi3-manylinux2014_s390x.whl", hash = "sha256:8afa18e1d6d20af15b417e728e9f60f3aa108ee76f23c3b2c07a2c3b546d3afd", size = 339411, upload-time = "2026-01-29T21:51:27.446Z" }, + { url = "https://files.pythonhosted.org/packages/9b/53/a9443aa3ca9ba8724fdfa02dd1887c1bcd8e89556b715cfbacca6b63dbec/protobuf-6.33.5-cp39-abi3-manylinux2014_x86_64.whl", hash = "sha256:cbf16ba3350fb7b889fca858fb215967792dc125b35c7976ca4818bee3521cf0", size = 323465, upload-time = "2026-01-29T21:51:28.925Z" }, + { url = "https://files.pythonhosted.org/packages/57/bf/2086963c69bdac3d7cff1cc7ff79b8ce5ea0bec6797a017e1be338a46248/protobuf-6.33.5-py3-none-any.whl", hash = "sha256:69915a973dd0f60f31a08b8318b73eab2bd6a392c79184b3612226b0a3f8ec02", size = 170687, upload-time = "2026-01-29T21:51:32.557Z" }, +] + +[[package]] +name = "pygments" +version = "2.19.2" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/b0/77/a5b8c569bf593b0140bde72ea885a803b82086995367bf2037de0159d924/pygments-2.19.2.tar.gz", hash = "sha256:636cb2477cec7f8952536970bc533bc43743542f70392ae026374600add5b887", size = 4968631, upload-time = "2025-06-21T13:39:12.283Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/c7/21/705964c7812476f378728bdf590ca4b771ec72385c533964653c68e86bdc/pygments-2.19.2-py3-none-any.whl", hash = "sha256:86540386c03d588bb81d44bc3928634ff26449851e99741617ecb9037ee5ec0b", size = 1225217, upload-time = "2025-06-21T13:39:07.939Z" }, +] + +[[package]] +name = "pytest" +version = "9.0.2" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "colorama", marker = "sys_platform == 'win32'" }, + { name = "iniconfig" }, + { name = "packaging" }, + { name = "pluggy" }, + { name = "pygments" }, +] +sdist = { url = 
"https://files.pythonhosted.org/packages/d1/db/7ef3487e0fb0049ddb5ce41d3a49c235bf9ad299b6a25d5780a89f19230f/pytest-9.0.2.tar.gz", hash = "sha256:75186651a92bd89611d1d9fc20f0b4345fd827c41ccd5c299a868a05d70edf11", size = 1568901, upload-time = "2025-12-06T21:30:51.014Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/3b/ab/b3226f0bd7cdcf710fbede2b3548584366da3b19b5021e74f5bde2a8fa3f/pytest-9.0.2-py3-none-any.whl", hash = "sha256:711ffd45bf766d5264d487b917733b453d917afd2b0ad65223959f59089f875b", size = 374801, upload-time = "2025-12-06T21:30:49.154Z" }, +] + +[[package]] +name = "pytest-cov" +version = "7.0.0" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "coverage", extra = ["toml"] }, + { name = "pluggy" }, + { name = "pytest" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/5e/f7/c933acc76f5208b3b00089573cf6a2bc26dc80a8aece8f52bb7d6b1855ca/pytest_cov-7.0.0.tar.gz", hash = "sha256:33c97eda2e049a0c5298e91f519302a1334c26ac65c1a483d6206fd458361af1", size = 54328, upload-time = "2025-09-09T10:57:02.113Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/ee/49/1377b49de7d0c1ce41292161ea0f721913fa8722c19fb9c1e3aa0367eecb/pytest_cov-7.0.0-py3-none-any.whl", hash = "sha256:3b8e9558b16cc1479da72058bdecf8073661c7f57f7d3c5f22a1c23507f2d861", size = 22424, upload-time = "2025-09-09T10:57:00.695Z" }, +] + +[[package]] +name = "python-dateutil" +version = "2.9.0.post0" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "six" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/66/c0/0c8b6ad9f17a802ee498c46e004a0eb49bc148f2fd230864601a86dcf6db/python-dateutil-2.9.0.post0.tar.gz", hash = "sha256:37dd54208da7e1cd875388217d5e00ebd4179249f90fb72437e91a35459a0ad3", size = 342432, upload-time = "2024-03-01T18:36:20.211Z" } +wheels = [ + { url = 
"https://files.pythonhosted.org/packages/ec/57/56b9bcc3c9c6a792fcbaf139543cee77261f3651ca9da0c93f5c1221264b/python_dateutil-2.9.0.post0-py2.py3-none-any.whl", hash = "sha256:a8b2bc7bffae282281c8140a97d3aa9c14da0b136dfe83f850eea9a5f7470427", size = 229892, upload-time = "2024-03-01T18:36:18.57Z" }, +] + +[[package]] +name = "pywin32" +version = "311" +source = { registry = "https://pypi.org/simple" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/7c/af/449a6a91e5d6db51420875c54f6aff7c97a86a3b13a0b4f1a5c13b988de3/pywin32-311-cp311-cp311-win32.whl", hash = "sha256:184eb5e436dea364dcd3d2316d577d625c0351bf237c4e9a5fabbcfa5a58b151", size = 8697031, upload-time = "2025-07-14T20:13:13.266Z" }, + { url = "https://files.pythonhosted.org/packages/51/8f/9bb81dd5bb77d22243d33c8397f09377056d5c687aa6d4042bea7fbf8364/pywin32-311-cp311-cp311-win_amd64.whl", hash = "sha256:3ce80b34b22b17ccbd937a6e78e7225d80c52f5ab9940fe0506a1a16f3dab503", size = 9508308, upload-time = "2025-07-14T20:13:15.147Z" }, + { url = "https://files.pythonhosted.org/packages/44/7b/9c2ab54f74a138c491aba1b1cd0795ba61f144c711daea84a88b63dc0f6c/pywin32-311-cp311-cp311-win_arm64.whl", hash = "sha256:a733f1388e1a842abb67ffa8e7aad0e70ac519e09b0f6a784e65a136ec7cefd2", size = 8703930, upload-time = "2025-07-14T20:13:16.945Z" }, + { url = "https://files.pythonhosted.org/packages/e7/ab/01ea1943d4eba0f850c3c61e78e8dd59757ff815ff3ccd0a84de5f541f42/pywin32-311-cp312-cp312-win32.whl", hash = "sha256:750ec6e621af2b948540032557b10a2d43b0cee2ae9758c54154d711cc852d31", size = 8706543, upload-time = "2025-07-14T20:13:20.765Z" }, + { url = "https://files.pythonhosted.org/packages/d1/a8/a0e8d07d4d051ec7502cd58b291ec98dcc0c3fff027caad0470b72cfcc2f/pywin32-311-cp312-cp312-win_amd64.whl", hash = "sha256:b8c095edad5c211ff31c05223658e71bf7116daa0ecf3ad85f3201ea3190d067", size = 9495040, upload-time = "2025-07-14T20:13:22.543Z" }, + { url = 
"https://files.pythonhosted.org/packages/ba/3a/2ae996277b4b50f17d61f0603efd8253cb2d79cc7ae159468007b586396d/pywin32-311-cp312-cp312-win_arm64.whl", hash = "sha256:e286f46a9a39c4a18b319c28f59b61de793654af2f395c102b4f819e584b5852", size = 8710102, upload-time = "2025-07-14T20:13:24.682Z" }, + { url = "https://files.pythonhosted.org/packages/a5/be/3fd5de0979fcb3994bfee0d65ed8ca9506a8a1260651b86174f6a86f52b3/pywin32-311-cp313-cp313-win32.whl", hash = "sha256:f95ba5a847cba10dd8c4d8fefa9f2a6cf283b8b88ed6178fa8a6c1ab16054d0d", size = 8705700, upload-time = "2025-07-14T20:13:26.471Z" }, + { url = "https://files.pythonhosted.org/packages/e3/28/e0a1909523c6890208295a29e05c2adb2126364e289826c0a8bc7297bd5c/pywin32-311-cp313-cp313-win_amd64.whl", hash = "sha256:718a38f7e5b058e76aee1c56ddd06908116d35147e133427e59a3983f703a20d", size = 9494700, upload-time = "2025-07-14T20:13:28.243Z" }, + { url = "https://files.pythonhosted.org/packages/04/bf/90339ac0f55726dce7d794e6d79a18a91265bdf3aa70b6b9ca52f35e022a/pywin32-311-cp313-cp313-win_arm64.whl", hash = "sha256:7b4075d959648406202d92a2310cb990fea19b535c7f4a78d3f5e10b926eeb8a", size = 8709318, upload-time = "2025-07-14T20:13:30.348Z" }, + { url = "https://files.pythonhosted.org/packages/c9/31/097f2e132c4f16d99a22bfb777e0fd88bd8e1c634304e102f313af69ace5/pywin32-311-cp314-cp314-win32.whl", hash = "sha256:b7a2c10b93f8986666d0c803ee19b5990885872a7de910fc460f9b0c2fbf92ee", size = 8840714, upload-time = "2025-07-14T20:13:32.449Z" }, + { url = "https://files.pythonhosted.org/packages/90/4b/07c77d8ba0e01349358082713400435347df8426208171ce297da32c313d/pywin32-311-cp314-cp314-win_amd64.whl", hash = "sha256:3aca44c046bd2ed8c90de9cb8427f581c479e594e99b5c0bb19b29c10fd6cb87", size = 9656800, upload-time = "2025-07-14T20:13:34.312Z" }, + { url = "https://files.pythonhosted.org/packages/c0/d2/21af5c535501a7233e734b8af901574572da66fcc254cb35d0609c9080dd/pywin32-311-cp314-cp314-win_arm64.whl", hash = 
"sha256:a508e2d9025764a8270f93111a970e1d0fbfc33f4153b388bb649b7eec4f9b42", size = 8932540, upload-time = "2025-07-14T20:13:36.379Z" }, +] + +[[package]] +name = "pyyaml" +version = "6.0.3" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/05/8e/961c0007c59b8dd7729d542c61a4d537767a59645b82a0b521206e1e25c2/pyyaml-6.0.3.tar.gz", hash = "sha256:d76623373421df22fb4cf8817020cbb7ef15c725b9d5e45f17e189bfc384190f", size = 130960, upload-time = "2025-09-25T21:33:16.546Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/6d/16/a95b6757765b7b031c9374925bb718d55e0a9ba8a1b6a12d25962ea44347/pyyaml-6.0.3-cp311-cp311-macosx_10_13_x86_64.whl", hash = "sha256:44edc647873928551a01e7a563d7452ccdebee747728c1080d881d68af7b997e", size = 185826, upload-time = "2025-09-25T21:31:58.655Z" }, + { url = "https://files.pythonhosted.org/packages/16/19/13de8e4377ed53079ee996e1ab0a9c33ec2faf808a4647b7b4c0d46dd239/pyyaml-6.0.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:652cb6edd41e718550aad172851962662ff2681490a8a711af6a4d288dd96824", size = 175577, upload-time = "2025-09-25T21:32:00.088Z" }, + { url = "https://files.pythonhosted.org/packages/0c/62/d2eb46264d4b157dae1275b573017abec435397aa59cbcdab6fc978a8af4/pyyaml-6.0.3-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:10892704fc220243f5305762e276552a0395f7beb4dbf9b14ec8fd43b57f126c", size = 775556, upload-time = "2025-09-25T21:32:01.31Z" }, + { url = "https://files.pythonhosted.org/packages/10/cb/16c3f2cf3266edd25aaa00d6c4350381c8b012ed6f5276675b9eba8d9ff4/pyyaml-6.0.3-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:850774a7879607d3a6f50d36d04f00ee69e7fc816450e5f7e58d7f17f1ae5c00", size = 882114, upload-time = "2025-09-25T21:32:03.376Z" }, + { url = 
"https://files.pythonhosted.org/packages/71/60/917329f640924b18ff085ab889a11c763e0b573da888e8404ff486657602/pyyaml-6.0.3-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:b8bb0864c5a28024fac8a632c443c87c5aa6f215c0b126c449ae1a150412f31d", size = 806638, upload-time = "2025-09-25T21:32:04.553Z" }, + { url = "https://files.pythonhosted.org/packages/dd/6f/529b0f316a9fd167281a6c3826b5583e6192dba792dd55e3203d3f8e655a/pyyaml-6.0.3-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:1d37d57ad971609cf3c53ba6a7e365e40660e3be0e5175fa9f2365a379d6095a", size = 767463, upload-time = "2025-09-25T21:32:06.152Z" }, + { url = "https://files.pythonhosted.org/packages/f2/6a/b627b4e0c1dd03718543519ffb2f1deea4a1e6d42fbab8021936a4d22589/pyyaml-6.0.3-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:37503bfbfc9d2c40b344d06b2199cf0e96e97957ab1c1b546fd4f87e53e5d3e4", size = 794986, upload-time = "2025-09-25T21:32:07.367Z" }, + { url = "https://files.pythonhosted.org/packages/45/91/47a6e1c42d9ee337c4839208f30d9f09caa9f720ec7582917b264defc875/pyyaml-6.0.3-cp311-cp311-win32.whl", hash = "sha256:8098f252adfa6c80ab48096053f512f2321f0b998f98150cea9bd23d83e1467b", size = 142543, upload-time = "2025-09-25T21:32:08.95Z" }, + { url = "https://files.pythonhosted.org/packages/da/e3/ea007450a105ae919a72393cb06f122f288ef60bba2dc64b26e2646fa315/pyyaml-6.0.3-cp311-cp311-win_amd64.whl", hash = "sha256:9f3bfb4965eb874431221a3ff3fdcddc7e74e3b07799e0e84ca4a0f867d449bf", size = 158763, upload-time = "2025-09-25T21:32:09.96Z" }, + { url = "https://files.pythonhosted.org/packages/d1/33/422b98d2195232ca1826284a76852ad5a86fe23e31b009c9886b2d0fb8b2/pyyaml-6.0.3-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:7f047e29dcae44602496db43be01ad42fc6f1cc0d8cd6c83d342306c32270196", size = 182063, upload-time = "2025-09-25T21:32:11.445Z" }, + { url = 
"https://files.pythonhosted.org/packages/89/a0/6cf41a19a1f2f3feab0e9c0b74134aa2ce6849093d5517a0c550fe37a648/pyyaml-6.0.3-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:fc09d0aa354569bc501d4e787133afc08552722d3ab34836a80547331bb5d4a0", size = 173973, upload-time = "2025-09-25T21:32:12.492Z" }, + { url = "https://files.pythonhosted.org/packages/ed/23/7a778b6bd0b9a8039df8b1b1d80e2e2ad78aa04171592c8a5c43a56a6af4/pyyaml-6.0.3-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:9149cad251584d5fb4981be1ecde53a1ca46c891a79788c0df828d2f166bda28", size = 775116, upload-time = "2025-09-25T21:32:13.652Z" }, + { url = "https://files.pythonhosted.org/packages/65/30/d7353c338e12baef4ecc1b09e877c1970bd3382789c159b4f89d6a70dc09/pyyaml-6.0.3-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:5fdec68f91a0c6739b380c83b951e2c72ac0197ace422360e6d5a959d8d97b2c", size = 844011, upload-time = "2025-09-25T21:32:15.21Z" }, + { url = "https://files.pythonhosted.org/packages/8b/9d/b3589d3877982d4f2329302ef98a8026e7f4443c765c46cfecc8858c6b4b/pyyaml-6.0.3-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:ba1cc08a7ccde2d2ec775841541641e4548226580ab850948cbfda66a1befcdc", size = 807870, upload-time = "2025-09-25T21:32:16.431Z" }, + { url = "https://files.pythonhosted.org/packages/05/c0/b3be26a015601b822b97d9149ff8cb5ead58c66f981e04fedf4e762f4bd4/pyyaml-6.0.3-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:8dc52c23056b9ddd46818a57b78404882310fb473d63f17b07d5c40421e47f8e", size = 761089, upload-time = "2025-09-25T21:32:17.56Z" }, + { url = "https://files.pythonhosted.org/packages/be/8e/98435a21d1d4b46590d5459a22d88128103f8da4c2d4cb8f14f2a96504e1/pyyaml-6.0.3-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:41715c910c881bc081f1e8872880d3c650acf13dfa8214bad49ed4cede7c34ea", size = 790181, upload-time = "2025-09-25T21:32:18.834Z" }, + { url = 
"https://files.pythonhosted.org/packages/74/93/7baea19427dcfbe1e5a372d81473250b379f04b1bd3c4c5ff825e2327202/pyyaml-6.0.3-cp312-cp312-win32.whl", hash = "sha256:96b533f0e99f6579b3d4d4995707cf36df9100d67e0c8303a0c55b27b5f99bc5", size = 137658, upload-time = "2025-09-25T21:32:20.209Z" }, + { url = "https://files.pythonhosted.org/packages/86/bf/899e81e4cce32febab4fb42bb97dcdf66bc135272882d1987881a4b519e9/pyyaml-6.0.3-cp312-cp312-win_amd64.whl", hash = "sha256:5fcd34e47f6e0b794d17de1b4ff496c00986e1c83f7ab2fb8fcfe9616ff7477b", size = 154003, upload-time = "2025-09-25T21:32:21.167Z" }, + { url = "https://files.pythonhosted.org/packages/1a/08/67bd04656199bbb51dbed1439b7f27601dfb576fb864099c7ef0c3e55531/pyyaml-6.0.3-cp312-cp312-win_arm64.whl", hash = "sha256:64386e5e707d03a7e172c0701abfb7e10f0fb753ee1d773128192742712a98fd", size = 140344, upload-time = "2025-09-25T21:32:22.617Z" }, + { url = "https://files.pythonhosted.org/packages/d1/11/0fd08f8192109f7169db964b5707a2f1e8b745d4e239b784a5a1dd80d1db/pyyaml-6.0.3-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:8da9669d359f02c0b91ccc01cac4a67f16afec0dac22c2ad09f46bee0697eba8", size = 181669, upload-time = "2025-09-25T21:32:23.673Z" }, + { url = "https://files.pythonhosted.org/packages/b1/16/95309993f1d3748cd644e02e38b75d50cbc0d9561d21f390a76242ce073f/pyyaml-6.0.3-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:2283a07e2c21a2aa78d9c4442724ec1eb15f5e42a723b99cb3d822d48f5f7ad1", size = 173252, upload-time = "2025-09-25T21:32:25.149Z" }, + { url = "https://files.pythonhosted.org/packages/50/31/b20f376d3f810b9b2371e72ef5adb33879b25edb7a6d072cb7ca0c486398/pyyaml-6.0.3-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ee2922902c45ae8ccada2c5b501ab86c36525b883eff4255313a253a3160861c", size = 767081, upload-time = "2025-09-25T21:32:26.575Z" }, + { url = 
"https://files.pythonhosted.org/packages/49/1e/a55ca81e949270d5d4432fbbd19dfea5321eda7c41a849d443dc92fd1ff7/pyyaml-6.0.3-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:a33284e20b78bd4a18c8c2282d549d10bc8408a2a7ff57653c0cf0b9be0afce5", size = 841159, upload-time = "2025-09-25T21:32:27.727Z" }, + { url = "https://files.pythonhosted.org/packages/74/27/e5b8f34d02d9995b80abcef563ea1f8b56d20134d8f4e5e81733b1feceb2/pyyaml-6.0.3-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:0f29edc409a6392443abf94b9cf89ce99889a1dd5376d94316ae5145dfedd5d6", size = 801626, upload-time = "2025-09-25T21:32:28.878Z" }, + { url = "https://files.pythonhosted.org/packages/f9/11/ba845c23988798f40e52ba45f34849aa8a1f2d4af4b798588010792ebad6/pyyaml-6.0.3-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:f7057c9a337546edc7973c0d3ba84ddcdf0daa14533c2065749c9075001090e6", size = 753613, upload-time = "2025-09-25T21:32:30.178Z" }, + { url = "https://files.pythonhosted.org/packages/3d/e0/7966e1a7bfc0a45bf0a7fb6b98ea03fc9b8d84fa7f2229e9659680b69ee3/pyyaml-6.0.3-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:eda16858a3cab07b80edaf74336ece1f986ba330fdb8ee0d6c0d68fe82bc96be", size = 794115, upload-time = "2025-09-25T21:32:31.353Z" }, + { url = "https://files.pythonhosted.org/packages/de/94/980b50a6531b3019e45ddeada0626d45fa85cbe22300844a7983285bed3b/pyyaml-6.0.3-cp313-cp313-win32.whl", hash = "sha256:d0eae10f8159e8fdad514efdc92d74fd8d682c933a6dd088030f3834bc8e6b26", size = 137427, upload-time = "2025-09-25T21:32:32.58Z" }, + { url = "https://files.pythonhosted.org/packages/97/c9/39d5b874e8b28845e4ec2202b5da735d0199dbe5b8fb85f91398814a9a46/pyyaml-6.0.3-cp313-cp313-win_amd64.whl", hash = "sha256:79005a0d97d5ddabfeeea4cf676af11e647e41d81c9a7722a193022accdb6b7c", size = 154090, upload-time = "2025-09-25T21:32:33.659Z" }, + { url = 
"https://files.pythonhosted.org/packages/73/e8/2bdf3ca2090f68bb3d75b44da7bbc71843b19c9f2b9cb9b0f4ab7a5a4329/pyyaml-6.0.3-cp313-cp313-win_arm64.whl", hash = "sha256:5498cd1645aa724a7c71c8f378eb29ebe23da2fc0d7a08071d89469bf1d2defb", size = 140246, upload-time = "2025-09-25T21:32:34.663Z" }, + { url = "https://files.pythonhosted.org/packages/9d/8c/f4bd7f6465179953d3ac9bc44ac1a8a3e6122cf8ada906b4f96c60172d43/pyyaml-6.0.3-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:8d1fab6bb153a416f9aeb4b8763bc0f22a5586065f86f7664fc23339fc1c1fac", size = 181814, upload-time = "2025-09-25T21:32:35.712Z" }, + { url = "https://files.pythonhosted.org/packages/bd/9c/4d95bb87eb2063d20db7b60faa3840c1b18025517ae857371c4dd55a6b3a/pyyaml-6.0.3-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:34d5fcd24b8445fadc33f9cf348c1047101756fd760b4dacb5c3e99755703310", size = 173809, upload-time = "2025-09-25T21:32:36.789Z" }, + { url = "https://files.pythonhosted.org/packages/92/b5/47e807c2623074914e29dabd16cbbdd4bf5e9b2db9f8090fa64411fc5382/pyyaml-6.0.3-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:501a031947e3a9025ed4405a168e6ef5ae3126c59f90ce0cd6f2bfc477be31b7", size = 766454, upload-time = "2025-09-25T21:32:37.966Z" }, + { url = "https://files.pythonhosted.org/packages/02/9e/e5e9b168be58564121efb3de6859c452fccde0ab093d8438905899a3a483/pyyaml-6.0.3-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:b3bc83488de33889877a0f2543ade9f70c67d66d9ebb4ac959502e12de895788", size = 836355, upload-time = "2025-09-25T21:32:39.178Z" }, + { url = "https://files.pythonhosted.org/packages/88/f9/16491d7ed2a919954993e48aa941b200f38040928474c9e85ea9e64222c3/pyyaml-6.0.3-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:c458b6d084f9b935061bc36216e8a69a7e293a2f1e68bf956dcd9e6cbcd143f5", size = 794175, upload-time = "2025-09-25T21:32:40.865Z" }, + { url = 
"https://files.pythonhosted.org/packages/dd/3f/5989debef34dc6397317802b527dbbafb2b4760878a53d4166579111411e/pyyaml-6.0.3-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:7c6610def4f163542a622a73fb39f534f8c101d690126992300bf3207eab9764", size = 755228, upload-time = "2025-09-25T21:32:42.084Z" }, + { url = "https://files.pythonhosted.org/packages/d7/ce/af88a49043cd2e265be63d083fc75b27b6ed062f5f9fd6cdc223ad62f03e/pyyaml-6.0.3-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:5190d403f121660ce8d1d2c1bb2ef1bd05b5f68533fc5c2ea899bd15f4399b35", size = 789194, upload-time = "2025-09-25T21:32:43.362Z" }, + { url = "https://files.pythonhosted.org/packages/23/20/bb6982b26a40bb43951265ba29d4c246ef0ff59c9fdcdf0ed04e0687de4d/pyyaml-6.0.3-cp314-cp314-win_amd64.whl", hash = "sha256:4a2e8cebe2ff6ab7d1050ecd59c25d4c8bd7e6f400f5f82b96557ac0abafd0ac", size = 156429, upload-time = "2025-09-25T21:32:57.844Z" }, + { url = "https://files.pythonhosted.org/packages/f4/f4/a4541072bb9422c8a883ab55255f918fa378ecf083f5b85e87fc2b4eda1b/pyyaml-6.0.3-cp314-cp314-win_arm64.whl", hash = "sha256:93dda82c9c22deb0a405ea4dc5f2d0cda384168e466364dec6255b293923b2f3", size = 143912, upload-time = "2025-09-25T21:32:59.247Z" }, + { url = "https://files.pythonhosted.org/packages/7c/f9/07dd09ae774e4616edf6cda684ee78f97777bdd15847253637a6f052a62f/pyyaml-6.0.3-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:02893d100e99e03eda1c8fd5c441d8c60103fd175728e23e431db1b589cf5ab3", size = 189108, upload-time = "2025-09-25T21:32:44.377Z" }, + { url = "https://files.pythonhosted.org/packages/4e/78/8d08c9fb7ce09ad8c38ad533c1191cf27f7ae1effe5bb9400a46d9437fcf/pyyaml-6.0.3-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:c1ff362665ae507275af2853520967820d9124984e0f7466736aea23d8611fba", size = 183641, upload-time = "2025-09-25T21:32:45.407Z" }, + { url = 
"https://files.pythonhosted.org/packages/7b/5b/3babb19104a46945cf816d047db2788bcaf8c94527a805610b0289a01c6b/pyyaml-6.0.3-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6adc77889b628398debc7b65c073bcb99c4a0237b248cacaf3fe8a557563ef6c", size = 831901, upload-time = "2025-09-25T21:32:48.83Z" }, + { url = "https://files.pythonhosted.org/packages/8b/cc/dff0684d8dc44da4d22a13f35f073d558c268780ce3c6ba1b87055bb0b87/pyyaml-6.0.3-cp314-cp314t-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:a80cb027f6b349846a3bf6d73b5e95e782175e52f22108cfa17876aaeff93702", size = 861132, upload-time = "2025-09-25T21:32:50.149Z" }, + { url = "https://files.pythonhosted.org/packages/b1/5e/f77dc6b9036943e285ba76b49e118d9ea929885becb0a29ba8a7c75e29fe/pyyaml-6.0.3-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:00c4bdeba853cc34e7dd471f16b4114f4162dc03e6b7afcc2128711f0eca823c", size = 839261, upload-time = "2025-09-25T21:32:51.808Z" }, + { url = "https://files.pythonhosted.org/packages/ce/88/a9db1376aa2a228197c58b37302f284b5617f56a5d959fd1763fb1675ce6/pyyaml-6.0.3-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:66e1674c3ef6f541c35191caae2d429b967b99e02040f5ba928632d9a7f0f065", size = 805272, upload-time = "2025-09-25T21:32:52.941Z" }, + { url = "https://files.pythonhosted.org/packages/da/92/1446574745d74df0c92e6aa4a7b0b3130706a4142b2d1a5869f2eaa423c6/pyyaml-6.0.3-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:16249ee61e95f858e83976573de0f5b2893b3677ba71c9dd36b9cf8be9ac6d65", size = 829923, upload-time = "2025-09-25T21:32:54.537Z" }, + { url = "https://files.pythonhosted.org/packages/f0/7a/1c7270340330e575b92f397352af856a8c06f230aa3e76f86b39d01b416a/pyyaml-6.0.3-cp314-cp314t-win_amd64.whl", hash = "sha256:4ad1906908f2f5ae4e5a8ddfce73c320c2a1429ec52eafd27138b7f1cbe341c9", size = 174062, upload-time = "2025-09-25T21:32:55.767Z" }, + { url = 
"https://files.pythonhosted.org/packages/f1/12/de94a39c2ef588c7e6455cfbe7343d3b2dc9d6b6b2f40c4c6565744c873d/pyyaml-6.0.3-cp314-cp314t-win_arm64.whl", hash = "sha256:ebc55a14a21cb14062aa4162f906cd962b28e2e9ea38f9b4391244cd8de4ae0b", size = 149341, upload-time = "2025-09-25T21:32:56.828Z" }, +] + +[[package]] +name = "requests" +version = "2.32.5" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "certifi" }, + { name = "charset-normalizer" }, + { name = "idna" }, + { name = "urllib3" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/c9/74/b3ff8e6c8446842c3f5c837e9c3dfcfe2018ea6ecef224c710c85ef728f4/requests-2.32.5.tar.gz", hash = "sha256:dbba0bac56e100853db0ea71b82b4dfd5fe2bf6d3754a8893c3af500cec7d7cf", size = 134517, upload-time = "2025-08-18T20:46:02.573Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/1e/db/4254e3eabe8020b458f1a747140d32277ec7a271daf1d235b70dc0b4e6e3/requests-2.32.5-py3-none-any.whl", hash = "sha256:2462f94637a34fd532264295e186976db0f5d453d1cdd31473c85a6a161affb6", size = 64738, upload-time = "2025-08-18T20:46:00.542Z" }, +] + +[[package]] +name = "requests-oauthlib" +version = "2.0.0" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "oauthlib" }, + { name = "requests" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/42/f2/05f29bc3913aea15eb670be136045bf5c5bbf4b99ecb839da9b422bb2c85/requests-oauthlib-2.0.0.tar.gz", hash = "sha256:b3dffaebd884d8cd778494369603a9e7b58d29111bf6b41bdc2dcd87203af4e9", size = 55650, upload-time = "2024-03-22T20:32:29.939Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/3b/5d/63d4ae3b9daea098d5d6f5da83984853c1bbacd5dc826764b249fe119d24/requests_oauthlib-2.0.0-py2.py3-none-any.whl", hash = "sha256:7dd8a5c40426b779b0868c404bdef9768deccf22749cde15852df527e6269b36", size = 24179, upload-time = "2024-03-22T20:32:28.055Z" }, +] + +[[package]] +name = "six" +version = "1.17.0" +source = { registry = 
"https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/94/e7/b2c673351809dca68a0e064b6af791aa332cf192da575fd474ed7d6f16a2/six-1.17.0.tar.gz", hash = "sha256:ff70335d468e7eb6ec65b95b99d3a2836546063f63acc5171de367e834932a81", size = 34031, upload-time = "2024-12-04T17:35:28.174Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/b7/ce/149a00dd41f10bc29e5921b496af8b574d8413afcd5e30dfa0ed46c2cc5e/six-1.17.0-py2.py3-none-any.whl", hash = "sha256:4721f391ed90541fddacab5acf947aa0d3dc7d27b2e1e8eda2be8970586c3274", size = 11050, upload-time = "2024-12-04T17:35:26.475Z" }, +] + +[[package]] +name = "tomli" +version = "2.4.0" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/82/30/31573e9457673ab10aa432461bee537ce6cef177667deca369efb79df071/tomli-2.4.0.tar.gz", hash = "sha256:aa89c3f6c277dd275d8e243ad24f3b5e701491a860d5121f2cdd399fbb31fc9c", size = 17477, upload-time = "2026-01-11T11:22:38.165Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/3c/d9/3dc2289e1f3b32eb19b9785b6a006b28ee99acb37d1d47f78d4c10e28bf8/tomli-2.4.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:b5ef256a3fd497d4973c11bf142e9ed78b150d36f5773f1ca6088c230ffc5867", size = 153663, upload-time = "2026-01-11T11:21:45.27Z" }, + { url = "https://files.pythonhosted.org/packages/51/32/ef9f6845e6b9ca392cd3f64f9ec185cc6f09f0a2df3db08cbe8809d1d435/tomli-2.4.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:5572e41282d5268eb09a697c89a7bee84fae66511f87533a6f88bd2f7b652da9", size = 148469, upload-time = "2026-01-11T11:21:46.873Z" }, + { url = "https://files.pythonhosted.org/packages/d6/c2/506e44cce89a8b1b1e047d64bd495c22c9f71f21e05f380f1a950dd9c217/tomli-2.4.0-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:551e321c6ba03b55676970b47cb1b73f14a0a4dce6a3e1a9458fd6d921d72e95", size = 236039, upload-time = "2026-01-11T11:21:48.503Z" }, + { url = 
"https://files.pythonhosted.org/packages/b3/40/e1b65986dbc861b7e986e8ec394598187fa8aee85b1650b01dd925ca0be8/tomli-2.4.0-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:5e3f639a7a8f10069d0e15408c0b96a2a828cfdec6fca05296ebcdcc28ca7c76", size = 243007, upload-time = "2026-01-11T11:21:49.456Z" }, + { url = "https://files.pythonhosted.org/packages/9c/6f/6e39ce66b58a5b7ae572a0f4352ff40c71e8573633deda43f6a379d56b3e/tomli-2.4.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:1b168f2731796b045128c45982d3a4874057626da0e2ef1fdd722848b741361d", size = 240875, upload-time = "2026-01-11T11:21:50.755Z" }, + { url = "https://files.pythonhosted.org/packages/aa/ad/cb089cb190487caa80204d503c7fd0f4d443f90b95cf4ef5cf5aa0f439b0/tomli-2.4.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:133e93646ec4300d651839d382d63edff11d8978be23da4cc106f5a18b7d0576", size = 246271, upload-time = "2026-01-11T11:21:51.81Z" }, + { url = "https://files.pythonhosted.org/packages/0b/63/69125220e47fd7a3a27fd0de0c6398c89432fec41bc739823bcc66506af6/tomli-2.4.0-cp311-cp311-win32.whl", hash = "sha256:b6c78bdf37764092d369722d9946cb65b8767bfa4110f902a1b2542d8d173c8a", size = 96770, upload-time = "2026-01-11T11:21:52.647Z" }, + { url = "https://files.pythonhosted.org/packages/1e/0d/a22bb6c83f83386b0008425a6cd1fa1c14b5f3dd4bad05e98cf3dbbf4a64/tomli-2.4.0-cp311-cp311-win_amd64.whl", hash = "sha256:d3d1654e11d724760cdb37a3d7691f0be9db5fbdaef59c9f532aabf87006dbaa", size = 107626, upload-time = "2026-01-11T11:21:53.459Z" }, + { url = "https://files.pythonhosted.org/packages/2f/6d/77be674a3485e75cacbf2ddba2b146911477bd887dda9d8c9dfb2f15e871/tomli-2.4.0-cp311-cp311-win_arm64.whl", hash = "sha256:cae9c19ed12d4e8f3ebf46d1a75090e4c0dc16271c5bce1c833ac168f08fb614", size = 94842, upload-time = "2026-01-11T11:21:54.831Z" }, + { url = 
"https://files.pythonhosted.org/packages/3c/43/7389a1869f2f26dba52404e1ef13b4784b6b37dac93bac53457e3ff24ca3/tomli-2.4.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:920b1de295e72887bafa3ad9f7a792f811847d57ea6b1215154030cf131f16b1", size = 154894, upload-time = "2026-01-11T11:21:56.07Z" }, + { url = "https://files.pythonhosted.org/packages/e9/05/2f9bf110b5294132b2edf13fe6ca6ae456204f3d749f623307cbb7a946f2/tomli-2.4.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:7d6d9a4aee98fac3eab4952ad1d73aee87359452d1c086b5ceb43ed02ddb16b8", size = 149053, upload-time = "2026-01-11T11:21:57.467Z" }, + { url = "https://files.pythonhosted.org/packages/e8/41/1eda3ca1abc6f6154a8db4d714a4d35c4ad90adc0bcf700657291593fbf3/tomli-2.4.0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:36b9d05b51e65b254ea6c2585b59d2c4cb91c8a3d91d0ed0f17591a29aaea54a", size = 243481, upload-time = "2026-01-11T11:21:58.661Z" }, + { url = "https://files.pythonhosted.org/packages/d2/6d/02ff5ab6c8868b41e7d4b987ce2b5f6a51d3335a70aa144edd999e055a01/tomli-2.4.0-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:1c8a885b370751837c029ef9bc014f27d80840e48bac415f3412e6593bbc18c1", size = 251720, upload-time = "2026-01-11T11:22:00.178Z" }, + { url = "https://files.pythonhosted.org/packages/7b/57/0405c59a909c45d5b6f146107c6d997825aa87568b042042f7a9c0afed34/tomli-2.4.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:8768715ffc41f0008abe25d808c20c3d990f42b6e2e58305d5da280ae7d1fa3b", size = 247014, upload-time = "2026-01-11T11:22:01.238Z" }, + { url = "https://files.pythonhosted.org/packages/2c/0e/2e37568edd944b4165735687cbaf2fe3648129e440c26d02223672ee0630/tomli-2.4.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:7b438885858efd5be02a9a133caf5812b8776ee0c969fea02c45e8e3f296ba51", size = 251820, upload-time = "2026-01-11T11:22:02.727Z" }, + { url = 
"https://files.pythonhosted.org/packages/5a/1c/ee3b707fdac82aeeb92d1a113f803cf6d0f37bdca0849cb489553e1f417a/tomli-2.4.0-cp312-cp312-win32.whl", hash = "sha256:0408e3de5ec77cc7f81960c362543cbbd91ef883e3138e81b729fc3eea5b9729", size = 97712, upload-time = "2026-01-11T11:22:03.777Z" }, + { url = "https://files.pythonhosted.org/packages/69/13/c07a9177d0b3bab7913299b9278845fc6eaaca14a02667c6be0b0a2270c8/tomli-2.4.0-cp312-cp312-win_amd64.whl", hash = "sha256:685306e2cc7da35be4ee914fd34ab801a6acacb061b6a7abca922aaf9ad368da", size = 108296, upload-time = "2026-01-11T11:22:04.86Z" }, + { url = "https://files.pythonhosted.org/packages/18/27/e267a60bbeeee343bcc279bb9e8fbed0cbe224bc7b2a3dc2975f22809a09/tomli-2.4.0-cp312-cp312-win_arm64.whl", hash = "sha256:5aa48d7c2356055feef06a43611fc401a07337d5b006be13a30f6c58f869e3c3", size = 94553, upload-time = "2026-01-11T11:22:05.854Z" }, + { url = "https://files.pythonhosted.org/packages/34/91/7f65f9809f2936e1f4ce6268ae1903074563603b2a2bd969ebbda802744f/tomli-2.4.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:84d081fbc252d1b6a982e1870660e7330fb8f90f676f6e78b052ad4e64714bf0", size = 154915, upload-time = "2026-01-11T11:22:06.703Z" }, + { url = "https://files.pythonhosted.org/packages/20/aa/64dd73a5a849c2e8f216b755599c511badde80e91e9bc2271baa7b2cdbb1/tomli-2.4.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:9a08144fa4cba33db5255f9b74f0b89888622109bd2776148f2597447f92a94e", size = 149038, upload-time = "2026-01-11T11:22:07.56Z" }, + { url = "https://files.pythonhosted.org/packages/9e/8a/6d38870bd3d52c8d1505ce054469a73f73a0fe62c0eaf5dddf61447e32fa/tomli-2.4.0-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:c73add4bb52a206fd0c0723432db123c0c75c280cbd67174dd9d2db228ebb1b4", size = 242245, upload-time = "2026-01-11T11:22:08.344Z" }, + { url = 
"https://files.pythonhosted.org/packages/59/bb/8002fadefb64ab2669e5b977df3f5e444febea60e717e755b38bb7c41029/tomli-2.4.0-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:1fb2945cbe303b1419e2706e711b7113da57b7db31ee378d08712d678a34e51e", size = 250335, upload-time = "2026-01-11T11:22:09.951Z" }, + { url = "https://files.pythonhosted.org/packages/a5/3d/4cdb6f791682b2ea916af2de96121b3cb1284d7c203d97d92d6003e91c8d/tomli-2.4.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:bbb1b10aa643d973366dc2cb1ad94f99c1726a02343d43cbc011edbfac579e7c", size = 245962, upload-time = "2026-01-11T11:22:11.27Z" }, + { url = "https://files.pythonhosted.org/packages/f2/4a/5f25789f9a460bd858ba9756ff52d0830d825b458e13f754952dd15fb7bb/tomli-2.4.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:4cbcb367d44a1f0c2be408758b43e1ffb5308abe0ea222897d6bfc8e8281ef2f", size = 250396, upload-time = "2026-01-11T11:22:12.325Z" }, + { url = "https://files.pythonhosted.org/packages/aa/2f/b73a36fea58dfa08e8b3a268750e6853a6aac2a349241a905ebd86f3047a/tomli-2.4.0-cp313-cp313-win32.whl", hash = "sha256:7d49c66a7d5e56ac959cb6fc583aff0651094ec071ba9ad43df785abc2320d86", size = 97530, upload-time = "2026-01-11T11:22:13.865Z" }, + { url = "https://files.pythonhosted.org/packages/3b/af/ca18c134b5d75de7e8dc551c5234eaba2e8e951f6b30139599b53de9c187/tomli-2.4.0-cp313-cp313-win_amd64.whl", hash = "sha256:3cf226acb51d8f1c394c1b310e0e0e61fecdd7adcb78d01e294ac297dd2e7f87", size = 108227, upload-time = "2026-01-11T11:22:15.224Z" }, + { url = "https://files.pythonhosted.org/packages/22/c3/b386b832f209fee8073c8138ec50f27b4460db2fdae9ffe022df89a57f9b/tomli-2.4.0-cp313-cp313-win_arm64.whl", hash = "sha256:d20b797a5c1ad80c516e41bc1fb0443ddb5006e9aaa7bda2d71978346aeb9132", size = 94748, upload-time = "2026-01-11T11:22:16.009Z" }, + { url = 
"https://files.pythonhosted.org/packages/f3/c4/84047a97eb1004418bc10bdbcfebda209fca6338002eba2dc27cc6d13563/tomli-2.4.0-cp314-cp314-macosx_10_15_x86_64.whl", hash = "sha256:26ab906a1eb794cd4e103691daa23d95c6919cc2fa9160000ac02370cc9dd3f6", size = 154725, upload-time = "2026-01-11T11:22:17.269Z" }, + { url = "https://files.pythonhosted.org/packages/a8/5d/d39038e646060b9d76274078cddf146ced86dc2b9e8bbf737ad5983609a0/tomli-2.4.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:20cedb4ee43278bc4f2fee6cb50daec836959aadaf948db5172e776dd3d993fc", size = 148901, upload-time = "2026-01-11T11:22:18.287Z" }, + { url = "https://files.pythonhosted.org/packages/73/e5/383be1724cb30f4ce44983d249645684a48c435e1cd4f8b5cded8a816d3c/tomli-2.4.0-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:39b0b5d1b6dd03684b3fb276407ebed7090bbec989fa55838c98560c01113b66", size = 243375, upload-time = "2026-01-11T11:22:19.154Z" }, + { url = "https://files.pythonhosted.org/packages/31/f0/bea80c17971c8d16d3cc109dc3585b0f2ce1036b5f4a8a183789023574f2/tomli-2.4.0-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:a26d7ff68dfdb9f87a016ecfd1e1c2bacbe3108f4e0f8bcd2228ef9a766c787d", size = 250639, upload-time = "2026-01-11T11:22:20.168Z" }, + { url = "https://files.pythonhosted.org/packages/2c/8f/2853c36abbb7608e3f945d8a74e32ed3a74ee3a1f468f1ffc7d1cb3abba6/tomli-2.4.0-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:20ffd184fb1df76a66e34bd1b36b4a4641bd2b82954befa32fe8163e79f1a702", size = 246897, upload-time = "2026-01-11T11:22:21.544Z" }, + { url = "https://files.pythonhosted.org/packages/49/f0/6c05e3196ed5337b9fe7ea003e95fd3819a840b7a0f2bf5a408ef1dad8ed/tomli-2.4.0-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:75c2f8bbddf170e8effc98f5e9084a8751f8174ea6ccf4fca5398436e0320bc8", size = 254697, upload-time = "2026-01-11T11:22:23.058Z" }, + { url = 
"https://files.pythonhosted.org/packages/f3/f5/2922ef29c9f2951883525def7429967fc4d8208494e5ab524234f06b688b/tomli-2.4.0-cp314-cp314-win32.whl", hash = "sha256:31d556d079d72db7c584c0627ff3a24c5d3fb4f730221d3444f3efb1b2514776", size = 98567, upload-time = "2026-01-11T11:22:24.033Z" }, + { url = "https://files.pythonhosted.org/packages/7b/31/22b52e2e06dd2a5fdbc3ee73226d763b184ff21fc24e20316a44ccc4d96b/tomli-2.4.0-cp314-cp314-win_amd64.whl", hash = "sha256:43e685b9b2341681907759cf3a04e14d7104b3580f808cfde1dfdb60ada85475", size = 108556, upload-time = "2026-01-11T11:22:25.378Z" }, + { url = "https://files.pythonhosted.org/packages/48/3d/5058dff3255a3d01b705413f64f4306a141a8fd7a251e5a495e3f192a998/tomli-2.4.0-cp314-cp314-win_arm64.whl", hash = "sha256:3d895d56bd3f82ddd6faaff993c275efc2ff38e52322ea264122d72729dca2b2", size = 96014, upload-time = "2026-01-11T11:22:26.138Z" }, + { url = "https://files.pythonhosted.org/packages/b8/4e/75dab8586e268424202d3a1997ef6014919c941b50642a1682df43204c22/tomli-2.4.0-cp314-cp314t-macosx_10_15_x86_64.whl", hash = "sha256:5b5807f3999fb66776dbce568cc9a828544244a8eb84b84b9bafc080c99597b9", size = 163339, upload-time = "2026-01-11T11:22:27.143Z" }, + { url = "https://files.pythonhosted.org/packages/06/e3/b904d9ab1016829a776d97f163f183a48be6a4deb87304d1e0116a349519/tomli-2.4.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:c084ad935abe686bd9c898e62a02a19abfc9760b5a79bc29644463eaf2840cb0", size = 159490, upload-time = "2026-01-11T11:22:28.399Z" }, + { url = "https://files.pythonhosted.org/packages/e3/5a/fc3622c8b1ad823e8ea98a35e3c632ee316d48f66f80f9708ceb4f2a0322/tomli-2.4.0-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:0f2e3955efea4d1cfbcb87bc321e00dc08d2bcb737fd1d5e398af111d86db5df", size = 269398, upload-time = "2026-01-11T11:22:29.345Z" }, + { url = 
"https://files.pythonhosted.org/packages/fd/33/62bd6152c8bdd4c305ad9faca48f51d3acb2df1f8791b1477d46ff86e7f8/tomli-2.4.0-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:0e0fe8a0b8312acf3a88077a0802565cb09ee34107813bba1c7cd591fa6cfc8d", size = 276515, upload-time = "2026-01-11T11:22:30.327Z" }, + { url = "https://files.pythonhosted.org/packages/4b/ff/ae53619499f5235ee4211e62a8d7982ba9e439a0fb4f2f351a93d67c1dd2/tomli-2.4.0-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:413540dce94673591859c4c6f794dfeaa845e98bf35d72ed59636f869ef9f86f", size = 273806, upload-time = "2026-01-11T11:22:32.56Z" }, + { url = "https://files.pythonhosted.org/packages/47/71/cbca7787fa68d4d0a9f7072821980b39fbb1b6faeb5f5cf02f4a5559fa28/tomli-2.4.0-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:0dc56fef0e2c1c470aeac5b6ca8cc7b640bb93e92d9803ddaf9ea03e198f5b0b", size = 281340, upload-time = "2026-01-11T11:22:33.505Z" }, + { url = "https://files.pythonhosted.org/packages/f5/00/d595c120963ad42474cf6ee7771ad0d0e8a49d0f01e29576ee9195d9ecdf/tomli-2.4.0-cp314-cp314t-win32.whl", hash = "sha256:d878f2a6707cc9d53a1be1414bbb419e629c3d6e67f69230217bb663e76b5087", size = 108106, upload-time = "2026-01-11T11:22:34.451Z" }, + { url = "https://files.pythonhosted.org/packages/de/69/9aa0c6a505c2f80e519b43764f8b4ba93b5a0bbd2d9a9de6e2b24271b9a5/tomli-2.4.0-cp314-cp314t-win_amd64.whl", hash = "sha256:2add28aacc7425117ff6364fe9e06a183bb0251b03f986df0e78e974047571fd", size = 120504, upload-time = "2026-01-11T11:22:35.764Z" }, + { url = "https://files.pythonhosted.org/packages/b3/9f/f1668c281c58cfae01482f7114a4b88d345e4c140386241a1a24dcc9e7bc/tomli-2.4.0-cp314-cp314t-win_arm64.whl", hash = "sha256:2b1e3b80e1d5e52e40e9b924ec43d81570f0e7d09d11081b797bc4692765a3d4", size = 99561, upload-time = "2026-01-11T11:22:36.624Z" }, + { url = 
"https://files.pythonhosted.org/packages/23/d1/136eb2cb77520a31e1f64cbae9d33ec6df0d78bdf4160398e86eec8a8754/tomli-2.4.0-py3-none-any.whl", hash = "sha256:1f776e7d669ebceb01dee46484485f43a4048746235e683bcdffacdf1fb4785a", size = 14477, upload-time = "2026-01-11T11:22:37.446Z" }, +] + +[[package]] +name = "typing-extensions" +version = "4.15.0" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/72/94/1a15dd82efb362ac84269196e94cf00f187f7ed21c242792a923cdb1c61f/typing_extensions-4.15.0.tar.gz", hash = "sha256:0cea48d173cc12fa28ecabc3b837ea3cf6f38c6d1136f85cbaaf598984861466", size = 109391, upload-time = "2025-08-25T13:49:26.313Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/18/67/36e9267722cc04a6b9f15c7f3441c2363321a3ea07da7ae0c0707beb2a9c/typing_extensions-4.15.0-py3-none-any.whl", hash = "sha256:f0fa19c6845758ab08074a0cfa8b7aecb71c999ca73d62883bc25cc018c4e548", size = 44614, upload-time = "2025-08-25T13:49:24.86Z" }, +] + +[[package]] +name = "urllib3" +version = "2.6.3" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/c7/24/5f1b3bdffd70275f6661c76461e25f024d5a38a46f04aaca912426a2b1d3/urllib3-2.6.3.tar.gz", hash = "sha256:1b62b6884944a57dbe321509ab94fd4d3b307075e0c2eae991ac71ee15ad38ed", size = 435556, upload-time = "2026-01-07T16:24:43.925Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/39/08/aaaad47bc4e9dc8c725e68f9d04865dbcb2052843ff09c97b08904852d84/urllib3-2.6.3-py3-none-any.whl", hash = "sha256:bf272323e553dfb2e87d9bfd225ca7b0f467b919d7bbd355436d3fd37cb0acd4", size = 131584, upload-time = "2026-01-07T16:24:42.685Z" }, +] + +[[package]] +name = "websocket-client" +version = "1.9.0" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/2c/41/aa4bf9664e4cda14c3b39865b12251e8e7d239f4cd0e3cc1b6c2ccde25c1/websocket_client-1.9.0.tar.gz", hash = 
"sha256:9e813624b6eb619999a97dc7958469217c3176312b3a16a4bd1bc7e08a46ec98", size = 70576, upload-time = "2025-10-07T21:16:36.495Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/34/db/b10e48aa8fff7407e67470363eac595018441cf32d5e1001567a7aeba5d2/websocket_client-1.9.0-py3-none-any.whl", hash = "sha256:af248a825037ef591efbf6ed20cc5faa03d3b47b9e5a2230a529eeee1c1fc3ef", size = 82616, upload-time = "2025-10-07T21:16:34.951Z" }, +] + +[[package]] +name = "xqueue-watcher" +version = "0.3" +source = { editable = "." } +dependencies = [ + { name = "docker" }, + { name = "kubernetes" }, + { name = "opentelemetry-api" }, + { name = "opentelemetry-exporter-otlp-proto-http" }, + { name = "opentelemetry-sdk" }, + { name = "requests" }, +] + +[package.optional-dependencies] +codejail = [ + { name = "edx-codejail" }, +] + +[package.dev-dependencies] +dev = [ + { name = "coverage" }, + { name = "edx-codejail" }, + { name = "mock" }, + { name = "pytest-cov" }, +] + +[package.metadata] +requires-dist = [ + { name = "docker", specifier = ">=7.0.0" }, + { name = "edx-codejail", marker = "extra == 'codejail'" }, + { name = "kubernetes", specifier = ">=29.0.0" }, + { name = "opentelemetry-api" }, + { name = "opentelemetry-exporter-otlp-proto-http" }, + { name = "opentelemetry-sdk" }, + { name = "requests" }, +] +provides-extras = ["codejail"] + +[package.metadata.requires-dev] +dev = [ + { name = "coverage" }, + { name = "edx-codejail" }, + { name = "mock" }, + { name = "pytest-cov" }, +] + +[[package]] +name = "zipp" +version = "3.23.0" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/e3/02/0f2892c661036d50ede074e376733dca2ae7c6eb617489437771209d4180/zipp-3.23.0.tar.gz", hash = "sha256:a07157588a12518c9d4034df3fbbee09c814741a33ff63c05fa29d26a2404166", size = 25547, upload-time = "2025-06-08T17:06:39.4Z" } +wheels = [ + { url = 
"https://files.pythonhosted.org/packages/2e/54/647ade08bf0db230bfea292f893923872fd20be6ac6f53b2b936ba839d75/zipp-3.23.0-py3-none-any.whl", hash = "sha256:071652d6115ed432f5ce1d34c336c0adfd6a884660d1e9712a256d3d3bd4b14e", size = 10276, upload-time = "2025-06-08T17:06:38.034Z" }, +] diff --git a/xqueue_watcher/client.py b/xqueue_watcher/client.py index b72a724..fbae997 100644 --- a/xqueue_watcher/client.py +++ b/xqueue_watcher/client.py @@ -1,6 +1,7 @@ import time import json import logging +import os import requests from requests.auth import HTTPBasicAuth import threading @@ -9,6 +10,15 @@ log = logging.getLogger(__name__) +# TLS verification is on by default. Set XQWATCHER_VERIFY_TLS=false to disable +# (e.g., for self-signed certificates in development). Never disable in production. +_VERIFY_TLS = os.environ.get("XQWATCHER_VERIFY_TLS", "true").strip().lower() not in ("0", "false", "no") +if not _VERIFY_TLS: + log.warning( + "TLS certificate verification is DISABLED (XQWATCHER_VERIFY_TLS=false). " + "This must not be used in production." 
+ ) + class XQueueClient: def __init__(self, @@ -81,6 +91,7 @@ def _request(self, method, uri, **kwargs): auth=self.http_basic_auth, timeout=self.requests_timeout, allow_redirects=self.follow_client_redirects, + verify=_VERIFY_TLS, **kwargs ) except requests.exceptions.ConnectionError as e: @@ -106,8 +117,8 @@ def _login(self): if self.username is None: return True url = self.xqueue_server + '/xqueue/login/' - log.debug(f"Trying to login to {url} with user: {self.username} and pass {self.password}") - response = self.session.request('post', url, auth=self.http_basic_auth, data={ + log.debug("Trying to login to %s with user: %s", url, self.username) + response = self.session.request('post', url, auth=self.http_basic_auth, verify=_VERIFY_TLS, data={ 'username': self.username, 'password': self.password, }) @@ -145,7 +156,7 @@ def _handle_submission(self, content): if result: reply = {'xqueue_body': json.dumps(result), 'xqueue_header': content['xqueue_header']} - status, message = self._request('post', '/xqueue/put_result/', data=reply, verify=False) + status, message = self._request('post', '/xqueue/put_result/', data=reply) if not status: log.error('Failure for %r -> %r', reply, message) success.append(status) @@ -177,10 +188,9 @@ def run(self): num_tries += 1 time.sleep(self.login_poll_interval) if not self._login(): - log.error("Still could not log in to %s (%s:%s) tries: %d", + log.error("Still could not log in to %s (user: %s) tries: %d", self.queue_name, self.username, - self.password, num_tries) else: break diff --git a/xqueue_watcher/containergrader.py b/xqueue_watcher/containergrader.py new file mode 100644 index 0000000..7627f3c --- /dev/null +++ b/xqueue_watcher/containergrader.py @@ -0,0 +1,679 @@ +""" +A grader implementation that executes student code inside an isolated container. 
+ +Supports two backends: + - "kubernetes": creates a batch/v1 Job per submission (production) + - "docker": runs a local Docker container (local dev / CI) + +This is the recommended replacement for JailedGrader on Kubernetes deployments. +The Kubernetes backend applies a defence-in-depth security posture: + - Non-root user (UID 1000), read-only root filesystem + - All Linux capabilities dropped + - RuntimeDefault seccomp profile (restricts available syscalls) + - /tmp emptyDir with a size cap (prevents disk exhaustion) + - No service-account token auto-mounted + - CPU, memory, and PID limits to prevent resource exhaustion + +Operators should also ensure: + - The grader namespace enforces the Kubernetes "restricted" Pod Security Standard + - A NetworkPolicy is applied to prevent egress from grading pods (see deploy/) + - Grader images are signed and scanned; use digest-pinned references in production + - The TTL controller is enabled so orphaned Jobs are reaped automatically +""" + +import json +import logging +import os +import random +import threading +import time +import uuid +from pathlib import Path + +from .grader import Grader +from .env_settings import get_container_grader_defaults + + +_BACKEND_KUBERNETES = "kubernetes" +_BACKEND_DOCKER = "docker" +_SUPPORTED_BACKENDS = (_BACKEND_KUBERNETES, _BACKEND_DOCKER) + +# Maximum submission size (bytes). Submissions larger than this are rejected +# before a container is launched to prevent etcd object-size overflows (K8s +# limit ~1.5 MB) and resource-exhaustion via very large env vars. +_SUBMISSION_SIZE_WARN_BYTES = 32 * 1024 # 32 KB +_SUBMISSION_SIZE_LIMIT_BYTES = int( + os.environ.get("XQWATCHER_SUBMISSION_SIZE_LIMIT", str(1024 * 1024)) # 1 MB default +) + +log = logging.getLogger(__name__) + + +class ImageDigestPoller: + """ + Background thread that periodically resolves an image tag to its digest. 
+ + Resolves ``repo:tag`` → ``repo@sha256:…`` by querying the Docker registry + via the Docker SDK's ``inspect_distribution`` API (no image pull required). + The resolved reference is cached and refreshed every ``poll_interval`` seconds. + + Thread-safe: ``resolved_image`` may be read from any thread at any time. + + If the initial resolution fails, ``resolved_image`` returns the original + unresolved reference so that grading can proceed with ``imagePullPolicy: + Always`` as a safe fallback. + """ + + def __init__(self, image: str, poll_interval: int = 300) -> None: + self._image = image + self._poll_interval = poll_interval + self._resolved: str | None = None + self._lock = threading.Lock() + self._thread = threading.Thread( + target=self._poll_loop, name=f"digest-poller-{image}", daemon=True + ) + self._thread.start() + + @property + def resolved_image(self) -> str: + with self._lock: + return self._resolved if self._resolved is not None else self._image + + def _poll_loop(self) -> None: + while True: + self._refresh() + time.sleep(self._poll_interval) + + def _refresh(self) -> None: + try: + import docker as docker_sdk + + client = docker_sdk.APIClient() + info = client.inspect_distribution(self._image) + digest = info["Descriptor"]["digest"] + # Strip digest ref (image@sha256:...) then strip tag if present. + # A tag is the last colon-separated segment that appears after the + # last slash (so registry ports like registry:5000/... are preserved). 
+ base = self._image.split("@")[0] + last_colon = base.rfind(":") + last_slash = base.rfind("/") + repo = base[:last_colon] if last_colon > last_slash else base + resolved = f"{repo}@{digest}" + with self._lock: + if self._resolved != resolved: + log.info( + "Resolved grader image %s → %s", self._image, resolved + ) + self._resolved = resolved + except Exception: + log.warning( + "Failed to resolve digest for grader image %s; " + "will retry in %ds", + self._image, + self._poll_interval, + exc_info=True, + ) + + +class ContainerGrader(Grader): + """ + Grades student submissions by running them inside an isolated container. + + The grader scripts and staff answer are baked into the course-specific grader + image. The container runs the complete grading pipeline (preprocessing, running + both the staff answer and the student submission, comparing results) and returns + a JSON grade result. The watcher pod does not need local access to grader files. + + Configuration (passed as KWARGS in the conf.d JSON handler config): + + grader_root - Path to the grader directory inside the container image. + For the Docker backend this is bind-mounted from the host; + for Kubernetes the scripts are baked into the image. + image - Docker image to run. Should extend grader-base and include + all course-specific grader scripts and dependencies. + backend - "kubernetes" or "docker". Defaults to + XQWATCHER_GRADER_BACKEND env var, or "kubernetes". + namespace - Kubernetes namespace to create Jobs in. Defaults to + XQWATCHER_GRADER_NAMESPACE env var, or "default". + cpu_limit - CPU limit for the grading container. Defaults to + XQWATCHER_GRADER_CPU_LIMIT env var, or "500m". + memory_limit - Memory limit for the grading container. Defaults to + XQWATCHER_GRADER_MEMORY_LIMIT env var, or "256Mi". + timeout - Maximum wall-clock seconds a grading job may run. Defaults + to XQWATCHER_GRADER_TIMEOUT env var, or 20. 
+ docker_host_grader_root - Host-side absolute path corresponding to grader_root + inside the watcher container. Required when xqueue-watcher + itself runs in a container (e.g. via docker-compose with the + Docker socket mounted): the Docker daemon interprets + bind-mount sources relative to the *host* filesystem, so + without this mapping the grader directory will not be found. + Example: if ``./data`` is mounted at ``/graders`` in the + watcher container, set this to the absolute host path of + ``./data``. Defaults to XQWATCHER_DOCKER_HOST_GRADER_ROOT + env var, or None (watcher runs directly on the host). + image_pull_policy - Kubernetes imagePullPolicy for grading Jobs: "Always", + "IfNotPresent", or "Never". When None (default) the policy + is inferred from the image reference: "IfNotPresent" for + digest-pinned refs (``repo@sha256:…``), "Always" for + tag-based refs (no digest present). + poll_image_digest - When True and ``image`` is a tag-based reference, start + a background ``ImageDigestPoller`` that periodically + resolves the tag to its current digest. Grading Jobs will + use the most recently resolved ``repo@digest`` reference, + which ensures Kubernetes nodes always pull the latest + pushed image without relying on ``imagePullPolicy: Always`` + for every pod. Default: False. + digest_poll_interval - Seconds between digest resolution polls when + ``poll_image_digest`` is True. Default: 300. + """ + + def __init__( + self, + grader_root, + image, + backend=None, + namespace=None, + cpu_limit=None, + memory_limit=None, + timeout=None, + image_pull_policy=None, + poll_image_digest=False, + digest_poll_interval=300, + docker_host_grader_root=None, + **kwargs, + ): + env_defaults = get_container_grader_defaults() + resolved_backend = backend if backend is not None else env_defaults["backend"] + if resolved_backend not in _SUPPORTED_BACKENDS: + raise ValueError( + f"Unsupported backend {resolved_backend!r}. Choose from {_SUPPORTED_BACKENDS}." 
+ ) + super().__init__(grader_root=grader_root, fork_per_item=False, **kwargs) + self.image = image + self.backend = resolved_backend + self.namespace = namespace if namespace is not None else env_defaults["namespace"] + self.cpu_limit = cpu_limit if cpu_limit is not None else env_defaults["cpu_limit"] + self.memory_limit = memory_limit if memory_limit is not None else env_defaults["memory_limit"] + self.timeout = timeout if timeout is not None else env_defaults["timeout"] + self.docker_host_grader_root = ( + docker_host_grader_root + if docker_host_grader_root is not None + else env_defaults["docker_host_grader_root"] + ) + + # image_pull_policy: explicit override or auto-detect from image ref. + # Normalise to title-case ("Always", "IfNotPresent", "Never") regardless + # of how the value was supplied in KWARGS — the Kubernetes API is + # case-sensitive and rejects variants like "always" or "ALWAYS". + _policy_map = {p.lower(): p for p in ("Always", "IfNotPresent", "Never")} + if image_pull_policy is not None: + self.image_pull_policy = _policy_map.get( + image_pull_policy.strip().lower(), image_pull_policy.strip() + ) + elif "@sha256:" in image: + self.image_pull_policy = "IfNotPresent" + else: + self.image_pull_policy = "Always" + + # Optional background digest polling for tag-based image references. + self._digest_poller: ImageDigestPoller | None = None + if poll_image_digest and "@sha256:" not in image: + self._digest_poller = ImageDigestPoller( + image=image, poll_interval=digest_poll_interval + ) + log.info( + "Started digest poller for grader image %s (interval=%ds)", + image, + digest_poll_interval, + ) + + # Lazily-initialised Kubernetes API clients (created once per instance + # on first use to avoid per-submission config-load overhead). + self._k8s_lock = threading.Lock() + self._k8s_batch_v1 = None + self._k8s_core_v1 = None + + def _effective_image(self) -> str: + """Return the image reference to use for container execution. 
+ + If a digest poller is active and has resolved a digest, returns the + pinned ``repo@sha256:…`` form. Falls back to the configured tag-based + reference otherwise. + """ + if self._digest_poller is not None: + return self._digest_poller.resolved_image + return self.image + + # ------------------------------------------------------------------ + # Internal: container execution + # ------------------------------------------------------------------ + + def _get_k8s_clients(self): + """Return cached (batch_v1, core_v1) Kubernetes API clients. + + Config is loaded and clients are constructed once per instance the + first time this method is called. Subsequent calls return the cached + objects, avoiding repeated kubeconfig reads on every submission. + """ + if self._k8s_batch_v1 is not None: + return self._k8s_batch_v1, self._k8s_core_v1 + with self._k8s_lock: + if self._k8s_batch_v1 is None: + try: + from kubernetes import client as k8s_client, config as k8s_config + except ImportError: + raise RuntimeError( + "The 'kubernetes' package is required for the kubernetes backend. " + "Install it with: uv add kubernetes" + ) + try: + k8s_config.load_incluster_config() + except k8s_config.ConfigException: + k8s_config.load_kube_config() + self._k8s_batch_v1 = k8s_client.BatchV1Api() + self._k8s_core_v1 = k8s_client.CoreV1Api() + return self._k8s_batch_v1, self._k8s_core_v1 + + def _run(self, grader_path, code, seed, grader_config=None): + """ + Run the complete grading pipeline inside a container. + + The container entrypoint (grader_support.entrypoint) handles: + - Loading the grader module (baked into the image) + - Preprocessing both staff answer and student submission + - Running both through grader_support.run + - Comparing results and returning the final grade JSON + + Returns the raw stdout bytes (JSON grade result). + Raises RuntimeError on timeout or non-zero exit. + """ + # Enforce submission size limits. 
Very large submissions passed as env + # vars contribute to the Pod object stored in etcd (~1.5 MB limit), and + # can be used for resource-exhaustion attacks. + code_bytes = len(code.encode("utf-8")) + if code_bytes > _SUBMISSION_SIZE_LIMIT_BYTES: + raise ValueError( + f"Submission too large ({code_bytes} bytes). " + f"Maximum allowed size is {_SUBMISSION_SIZE_LIMIT_BYTES} bytes." + ) + if code_bytes > _SUBMISSION_SIZE_WARN_BYTES: + self.log.warning( + "Submission code is large (%d bytes). Very large submissions may " + "exceed Kubernetes API object size limits when passed via env var.", + code_bytes, + ) + if grader_config is None: + grader_config = {} + if self.backend == _BACKEND_KUBERNETES: + return self._run_kubernetes(grader_path, code, seed, grader_config) + return self._run_docker(grader_path, code, seed, grader_config) + + def _run_kubernetes(self, grader_path, code, seed, grader_config): + """Create a Kubernetes Job, wait for it, collect stdout, delete it.""" + from kubernetes import client as k8s_client # noqa: F401 — needed for V1DeleteOptions + + batch_v1, core_v1 = self._get_k8s_clients() + + job_name = f"xqueue-grader-{uuid.uuid4().hex[:12]}" + + job_manifest = self._build_k8s_job(job_name, grader_path, code, seed, grader_config) + + try: + batch_v1.create_namespaced_job(namespace=self.namespace, body=job_manifest) + self.log.debug("Created Job %s", job_name) + + stdout = self._wait_and_collect_k8s( + batch_v1, core_v1, job_name, timeout=self.timeout + ) + return stdout + finally: + try: + batch_v1.delete_namespaced_job( + name=job_name, + namespace=self.namespace, + body=k8s_client.V1DeleteOptions(propagation_policy="Foreground"), + ) + except Exception: + self.log.warning("Failed to delete Job %s", job_name, exc_info=True) + + def _build_k8s_job(self, job_name, grader_path, code, seed, grader_config=None): + """Return a kubernetes Job manifest for the given grading run.""" + from kubernetes import client as k8s_client + + if grader_config is None: 
+ grader_config = {} + + # The entrypoint takes: GRADER_FILE SEED + # The grader scripts are baked into the course-specific image at grader_path. + # working_dir must stay at /grader (the WORKDIR of the base image) so that + # `python -m grader_support.entrypoint` can locate the grader_support package. + grader_abs = str(grader_path) + + return k8s_client.V1Job( + api_version="batch/v1", + kind="Job", + metadata=k8s_client.V1ObjectMeta( + name=job_name, + labels={ + "app.kubernetes.io/component": "xqueue-grader", + "app.kubernetes.io/managed-by": "xqueue-watcher", + }, + ), + spec=k8s_client.V1JobSpec( + backoff_limit=0, + active_deadline_seconds=self.timeout, + ttl_seconds_after_finished=300, + template=k8s_client.V1PodTemplateSpec( + metadata=k8s_client.V1ObjectMeta( + labels={ + "app.kubernetes.io/component": "xqueue-grader", + "app.kubernetes.io/managed-by": "xqueue-watcher", + } + ), + spec=k8s_client.V1PodSpec( + restart_policy="Never", + automount_service_account_token=False, + security_context=k8s_client.V1PodSecurityContext( + run_as_non_root=True, + run_as_user=1000, + seccomp_profile=k8s_client.V1SeccompProfile( + type="RuntimeDefault", + ), + ), + # Grader scripts are baked into the course-specific image + # (no volume mount required). The image extends + # grader_support/Dockerfile.base and includes the grader + # files at the path referenced by grader_abs. 
+                    containers=[
+                        k8s_client.V1Container(
+                            name="grader",
+                            image=self._effective_image(),
+                            image_pull_policy=self.image_pull_policy,
+                            # entrypoint signature: GRADER_FILE SEED
+                            args=[grader_abs, str(seed)],
+                            working_dir="/grader",
+                            env=[
+                                k8s_client.V1EnvVar(
+                                    name="SUBMISSION_CODE",
+                                    value=code,
+                                ),
+                                k8s_client.V1EnvVar(
+                                    name="GRADER_LANGUAGE",
+                                    value=grader_config.get("lang", "en"),
+                                ),
+                                k8s_client.V1EnvVar(
+                                    name="HIDE_OUTPUT",
+                                    value="1" if grader_config.get("hide_output") else "0",
+                                ),
+                            ],
+                            # Note: "pids" is not a valid container resource name in the
+                            # Kubernetes resources API (only cpu, memory, ephemeral-storage,
+                            # hugepages-*, and domain-prefixed extended resources are
+                            # accepted). Per-pod PID limits must be enforced cluster-side
+                            # via the kubelet's podPidsLimit setting.
+                            resources=k8s_client.V1ResourceRequirements(
+                                limits={
+                                    "cpu": self.cpu_limit,
+                                    "memory": self.memory_limit,
+                                },
+                                requests={
+                                    "cpu": "100m",
+                                    "memory": "64Mi",
+                                },
+                            ),
+                            security_context=k8s_client.V1SecurityContext(
+                                allow_privilege_escalation=False,
+                                read_only_root_filesystem=True,
+                                capabilities=k8s_client.V1Capabilities(drop=["ALL"]),
+                                seccomp_profile=k8s_client.V1SeccompProfile(
+                                    type="RuntimeDefault",
+                                ),
+                            ),
+                            volume_mounts=[
+                                k8s_client.V1VolumeMount(
+                                    name="tmp",
+                                    mount_path="/tmp",
+                                ),
+                            ],
+                        )
+                    ],
+                    volumes=[
+                        # emptyDir at /tmp is required because read_only_root_filesystem=True
+                        # prevents writes to the root FS; the entrypoint writes the student
+                        # submission to /tmp/submission.py before executing it.
+                        k8s_client.V1Volume(
+                            name="tmp",
+                            empty_dir=k8s_client.V1EmptyDirVolumeSource(
+                                size_limit="50Mi",
+                            ),
+                        ),
+                    ],
+                )
+            ),
+        ),
+    )
+
+    def _wait_and_collect_k8s(self, batch_v1, core_v1, job_name, timeout):
+        """Poll until the Job completes, then return its pod's stdout bytes."""
+        deadline = time.monotonic() + timeout
+        while time.monotonic() < deadline:
+            job = batch_v1.read_namespaced_job(name=job_name, namespace=self.namespace)
+            if job.status.succeeded:
+                break
+            if job.status.failed:
+                raise RuntimeError(f"Grading Job {job_name} failed.")
+            time.sleep(1)
+        else:
+            raise RuntimeError(
+                f"Grading Job {job_name} exceeded timeout of {timeout}s."
+            )
+
+        pods = core_v1.list_namespaced_pod(
+            namespace=self.namespace,
+            label_selector=f"job-name={job_name}",
+        )
+        if not pods.items:
+            raise RuntimeError(f"No pods found for Job {job_name}.")
+
+        pod_name = pods.items[0].metadata.name
+        # The Kubernetes Python client deserializes the log response body via
+        # json.loads() then casts to str(), turning valid JSON into Python repr
+        # (single-quoted dict). Pass _preload_content=False to get the raw
+        # urllib3 response object and read the bytes directly, bypassing the
+        # client's deserialisation entirely.
+        raw = core_v1.read_namespaced_pod_log(
+            name=pod_name,
+            namespace=self.namespace,
+            container="grader",
+            _preload_content=False,
+        )
+        # Named log_text rather than log to avoid shadowing the module logger.
+        log_text = raw.data.decode("utf-8")
+        # Scan backwards to find the last non-empty line (the JSON result).
+        # Earlier lines may be stderr interleaved by the Kubernetes log API.
+        json_line = None
+        for line in reversed(log_text.splitlines()):
+            stripped = line.strip()
+            if stripped:
+                json_line = stripped
+                break
+        if not json_line:
+            raise RuntimeError(f"No output from grading pod {pod_name}.")
+        return json_line.encode("utf-8")
+
+    def _run_docker(self, grader_path, code, seed, grader_config=None):
+        """Run a local Docker container and return stdout bytes."""
+        try:
+            import docker as docker_sdk
+        except ImportError:
+            raise RuntimeError(
+                "The 'docker' package is required for the docker backend. "
+                "Install it with: uv add docker"
+            )
+
+        if grader_config is None:
+            grader_config = {}
+
+        grader_dir = str(Path(grader_path).parent.resolve())
+        grader_rel = str(Path(grader_path).name)
+        # Mount the problem directory at /graders/ (not /grader/ which would
+        # overwrite the base image's grader_support package). Pass the grader
+        # as an absolute in-container path.
+        container_grader_path = f"/graders/{grader_rel}"
+
+        # When xqueue-watcher runs inside a container, grader_dir is a
+        # container-internal path.
docker_host_grader_root maps grader_root to + # the equivalent directory on the Docker host so that bind-mounts reach + # the correct location. + if self.docker_host_grader_root: + rel = Path(grader_path).parent.resolve().relative_to( + Path(self.grader_root).resolve() + ) + host_grader_dir = str(Path(self.docker_host_grader_root) / rel) + else: + host_grader_dir = grader_dir + + env = { + "SUBMISSION_CODE": code, + "GRADER_LANGUAGE": grader_config.get("lang", "en"), + "HIDE_OUTPUT": "1" if grader_config.get("hide_output") else "0", + } + + client = docker_sdk.from_env() + try: + # Run detached so we can enforce a wall-clock timeout via container.wait(). + # containers.run() does not accept a timeout argument; using detach=True + # lets us call container.wait(timeout=...) to cap execution time. + container = client.containers.run( + image=self._effective_image(), + # entrypoint signature: GRADER_FILE SEED + command=[container_grader_path, str(seed)], + working_dir="/grader", + environment=env, + volumes={host_grader_dir: {"bind": "/graders", "mode": "ro"}}, + mem_limit=_parse_memory_bytes(self.memory_limit), + nano_cpus=int(_parse_cpu_millis(self.cpu_limit) * 1_000_000), + network_disabled=True, + read_only=True, + detach=True, + stdout=True, + stderr=False, + ) + try: + exit_info = container.wait(timeout=self.timeout) + if exit_info.get("StatusCode", 0) != 0: + stderr = container.logs(stdout=False, stderr=True) + raise RuntimeError( + f"Grading container exited with non-zero status: {exit_info}. " + f"stderr: {stderr[:2000] if stderr else ''}" + ) + result = container.logs(stdout=True, stderr=False) + except Exception as exc: + # Catch ReadTimeout (requests.exceptions.ReadTimeout) from container.wait() + # and any other unexpected error, converting to a clear RuntimeError. + exc_name = type(exc).__name__ + if "Timeout" in exc_name or "timeout" in str(exc).lower(): + raise RuntimeError( + f"Grading container timed out after {self.timeout}s." 
+ ) from exc + raise + finally: + container.remove(force=True) + except docker_sdk.errors.ContainerError as exc: + raise RuntimeError( + f"Grading container exited with error: {exc}" + ) from exc + + return result if isinstance(result, bytes) else result.encode("utf-8") + + # ------------------------------------------------------------------ + # Public grading interface + # ------------------------------------------------------------------ + + def grade(self, grader_path, grader_config, submission): + """ + Grade a student submission by running the full pipeline inside a container. + + The container (grader_support.entrypoint) handles all grading steps: + - Loading the grader module (baked into the image) + - Validating the submission format + - Preprocessing and running the staff answer and student submission + - Comparing results test-by-test + - Returning the final grade as JSON + + Returns a dict with keys: correct, score, errors, tests. + """ + if not isinstance(submission, str): + self.log.warning("Submission is NOT unicode") + + results = { + "errors": [], + "tests": [], + "correct": False, + "score": 0, + } + + if grader_config.get("skip_grader", False): + results["correct"] = True + results["score"] = 1 + self.log.debug("Skipping the grader.") + return results + + seed = str(random.randint(0, 20000)) + + try: + output = self._run(grader_path, submission, seed, grader_config) + self.log.debug( + "Raw container output (%d bytes) for grader %s: %r", + len(output), + grader_path, + output[:4096], + ) + grade_result = json.loads(output.decode("utf-8")) + return grade_result + except json.JSONDecodeError: + self.log.error( + "Failed to parse container output as JSON for grader %s. " + "Raw output (%d bytes): %r", + grader_path, + len(output), + output[:4096], + ) + raise + except Exception: + self.log.exception( + "Grading container failed. grader = %s", grader_path + ) + results["errors"].append( + "There was a problem running your code (Staff debug). 
" + "Please contact the course staff for assistance." + ) + return results + + + +def _parse_cpu_millis(cpu_str): + """Convert a Kubernetes CPU string like '500m' or '1' to a float of millicores.""" + cpu_str = str(cpu_str).strip() + if cpu_str.endswith("m"): + return float(cpu_str[:-1]) + return float(cpu_str) * 1000 + + +def _parse_memory_bytes(memory_str): + """Convert a Kubernetes/Docker memory string to bytes for the Docker API. + + Handles IEC binary suffixes (Ki, Mi, Gi, Ti) and SI decimal suffixes + (K, M, G, T). Plain integers are returned unchanged. + + Examples: + "256Mi" -> 268435456 + "1Gi" -> 1073741824 + "512M" -> 512000000 + "1024" -> 1024 + """ + s = str(memory_str).strip() + iec = {"Ti": 1024**4, "Gi": 1024**3, "Mi": 1024**2, "Ki": 1024} + si = {"T": 1000**4, "G": 1000**3, "M": 1000**2, "K": 1000} + for suffix, factor in iec.items(): + if s.endswith(suffix): + return int(float(s[: -len(suffix)]) * factor) + for suffix, factor in si.items(): + if s.endswith(suffix): + return int(float(s[: -len(suffix)]) * factor) + return int(s) diff --git a/xqueue_watcher/env_settings.py b/xqueue_watcher/env_settings.py new file mode 100644 index 0000000..e075bdf --- /dev/null +++ b/xqueue_watcher/env_settings.py @@ -0,0 +1,256 @@ +""" +12-factor / Kubernetes-compatible settings for xqueue-watcher. + +All manager configuration values can be supplied via environment variables +using the ``XQWATCHER_`` prefix. This module mirrors the keys defined in +:data:`xqueue_watcher.settings.MANAGER_CONFIG_DEFAULTS` so it can be used +as a drop-in source of configuration alongside or instead of the JSON file +read by :func:`xqueue_watcher.settings.get_manager_config_values`. + +It also provides :func:`configure_logging`, which initialises a structured +stdout logging configuration without requiring a ``logging.json`` file — +suitable for Kubernetes and any 12-factor environment where logs are consumed +from stdout by the container runtime. 
+ +Environment variables +--------------------- +XQWATCHER_LOG_LEVEL + Root log level (default: ``INFO``). Accepts any standard Python level + name: ``DEBUG``, ``INFO``, ``WARNING``, ``ERROR``, ``CRITICAL``. +XQWATCHER_HTTP_BASIC_AUTH + HTTP Basic Auth credentials as ``username:password``. Parsed into a + ``(username, password)`` tuple suitable for ``HTTPBasicAuth(*value)``. + Unset or empty means no authentication (equivalent to ``None``). +XQWATCHER_POLL_TIME + Seconds between liveness checks of client threads (integer, default 10). +XQWATCHER_REQUESTS_TIMEOUT + Timeout in seconds for outbound HTTP requests (integer, default 1). +XQWATCHER_POLL_INTERVAL + Seconds between queue-polling attempts (integer, default 1). +XQWATCHER_LOGIN_POLL_INTERVAL + Seconds between login-retry attempts (integer, default 5). +XQWATCHER_FOLLOW_CLIENT_REDIRECTS + Follow HTTP redirects when ``true`` or ``1``, ignore otherwise + (boolean, default false). +XQWATCHER_VERIFY_TLS + Verify TLS certificates for outbound HTTPS requests when ``true`` or ``1`` + (boolean, default true). Set to ``false`` only in development environments + with self-signed certificates. **Never disable in production.** +XQWATCHER_SUBMISSION_SIZE_LIMIT + Maximum submission size in bytes (integer, default 1048576 = 1 MB). + Submissions larger than this value are rejected before a grading container + is launched. Prevents etcd object-size overflows and resource-exhaustion + attacks via very large environment variables. + +Named XQueue server references (Kubernetes) +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +XQueue server connection details — URL and credentials — are kept in +``xqueue_servers.json`` in the config root. In Kubernetes this file is +the preferred mechanism for injecting secrets: create a Kubernetes Secret +whose keys are ``xqueue_servers.json`` and mount it as a volume into the +config root directory. 
Queue configs in ``conf.d`` then reference servers +by name using the ``SERVER_REF`` field, keeping credentials out of those +files entirely. + +Example Kubernetes Secret (``stringData`` form):: + + apiVersion: v1 + kind: Secret + metadata: + name: xqueue-servers + stringData: + xqueue_servers.json: | + { + "default": { + "SERVER": "http://xqueue-svc:18040", + "AUTH": ["lms", "s3cr3t"] + } + } + +Mount it alongside the rest of the config:: + + volumes: + - name: xqueue-servers + secret: + secretName: xqueue-servers + volumeMounts: + - name: xqueue-servers + mountPath: /config/xqueue_servers.json + subPath: xqueue_servers.json + +Queue configs in ``conf.d`` can then omit ``SERVER`` and ``AUTH``:: + + { "my-queue": { "SERVER_REF": "default", "CONNECTIONS": 1, ... } } + +ContainerGrader defaults +~~~~~~~~~~~~~~~~~~~~~~~~ +These allow operators to set deployment-wide grader defaults without repeating +them in every conf.d queue JSON file. Individual queue configs may still +override any of these values in their ``KWARGS`` block. + +XQWATCHER_GRADER_BACKEND + Container backend: ``kubernetes`` (default) or ``docker``. +XQWATCHER_GRADER_NAMESPACE + Kubernetes namespace in which grading Jobs are created (default: + ``default``). Ignored by the Docker backend. +XQWATCHER_GRADER_CPU_LIMIT + CPU limit for grading containers in Kubernetes / Docker notation + (default: ``500m``). +XQWATCHER_GRADER_MEMORY_LIMIT + Memory limit for grading containers, e.g. ``256Mi`` (default: ``256Mi``). +XQWATCHER_GRADER_TIMEOUT + Maximum wall-clock seconds a grading job may run (integer, default 20). +XQWATCHER_DOCKER_HOST_GRADER_ROOT + Host-side absolute path that corresponds to ``grader_root`` inside the + watcher container. Required when xqueue-watcher itself runs in a + container with the Docker backend: the Docker daemon interprets + bind-mount source paths relative to the *host* filesystem, not the + watcher container, so without this mapping the grader directory will + not be found. 
Example: if ``./data`` is mounted at ``/graders`` in the + watcher container, set this to the absolute host path of ``./data`` + (e.g. ``/home/user/project/data``). Unset by default (watcher runs + directly on the host). +""" + +import logging +import logging.config +import os + +from .settings import MANAGER_CONFIG_DEFAULTS + +_PREFIX = "XQWATCHER_" + +_LOG_FORMAT = "%(asctime)s %(levelname)s %(process)d [%(name)s] %(filename)s:%(lineno)d - %(message)s" + + +# --------------------------------------------------------------------------- +# Internal helpers +# --------------------------------------------------------------------------- + +def _get_bool(name: str, default: bool) -> bool: + raw = os.environ.get(name, "").strip().lower() + if raw in ("1", "true", "yes"): + return True + if raw in ("0", "false", "no"): + return False + return default + + +def _get_int(name: str, default: int) -> int: + raw = os.environ.get(name, "").strip() + if raw: + return int(raw) + return default + + +def _get_str(name: str, default: str | None) -> str | None: + raw = os.environ.get(name, "").strip() + return raw if raw else default + + +def _get_auth(name: str, default): + raw = os.environ.get(name, "").strip() + if not raw: + return default + username, _, password = raw.partition(":") + return (username, password) + + +# --------------------------------------------------------------------------- +# Public API +# --------------------------------------------------------------------------- + +def configure_logging() -> None: + """ + Initialise logging to stdout using a level read from the environment. + + This is the 12-factor / Kubernetes alternative to supplying a + ``logging.json`` file. All log records are written to ``stdout`` so they + are captured by the container runtime and forwarded to whatever log + aggregation system is in use (e.g. Fluentd, Loki, CloudWatch). 
+ + The root log level defaults to ``INFO`` and can be overridden via the + ``XQWATCHER_LOG_LEVEL`` environment variable. The ``requests`` and + ``urllib3`` libraries are pinned to ``WARNING`` to suppress noisy + HTTP-level debug output. + """ + level = os.environ.get(f"{_PREFIX}LOG_LEVEL", "INFO").strip().upper() + + logging.config.dictConfig({ + "version": 1, + "disable_existing_loggers": False, + "formatters": { + "standard": { + "format": _LOG_FORMAT, + }, + }, + "handlers": { + "stdout": { + "class": "logging.StreamHandler", + "stream": "ext://sys.stdout", + "formatter": "standard", + "level": level, + }, + }, + "root": { + "handlers": ["stdout"], + "level": level, + }, + "loggers": { + "requests": {"level": "WARNING"}, + "urllib3": {"level": "WARNING"}, + }, + }) + + +def get_manager_config_from_env() -> dict: + """ + Return manager configuration populated from environment variables. + + Values not present in the environment fall back to + :data:`~xqueue_watcher.settings.MANAGER_CONFIG_DEFAULTS`. + """ + return { + "HTTP_BASIC_AUTH": _get_auth( + f"{_PREFIX}HTTP_BASIC_AUTH", + MANAGER_CONFIG_DEFAULTS["HTTP_BASIC_AUTH"], + ), + "POLL_TIME": _get_int( + f"{_PREFIX}POLL_TIME", + MANAGER_CONFIG_DEFAULTS["POLL_TIME"], + ), + "REQUESTS_TIMEOUT": _get_int( + f"{_PREFIX}REQUESTS_TIMEOUT", + MANAGER_CONFIG_DEFAULTS["REQUESTS_TIMEOUT"], + ), + "POLL_INTERVAL": _get_int( + f"{_PREFIX}POLL_INTERVAL", + MANAGER_CONFIG_DEFAULTS["POLL_INTERVAL"], + ), + "LOGIN_POLL_INTERVAL": _get_int( + f"{_PREFIX}LOGIN_POLL_INTERVAL", + MANAGER_CONFIG_DEFAULTS["LOGIN_POLL_INTERVAL"], + ), + "FOLLOW_CLIENT_REDIRECTS": _get_bool( + f"{_PREFIX}FOLLOW_CLIENT_REDIRECTS", + MANAGER_CONFIG_DEFAULTS["FOLLOW_CLIENT_REDIRECTS"], + ), + } + + +def get_container_grader_defaults() -> dict: + """ + Return deployment-wide ContainerGrader defaults from environment variables. + + These values are used when a ``ContainerGrader`` is constructed without + an explicit value for the corresponding parameter. 
Any value supplied + directly in the conf.d ``KWARGS`` block takes precedence. + """ + return { + "backend": _get_str(f"{_PREFIX}GRADER_BACKEND", "kubernetes"), + "namespace": _get_str(f"{_PREFIX}GRADER_NAMESPACE", "default"), + "cpu_limit": _get_str(f"{_PREFIX}GRADER_CPU_LIMIT", "500m"), + "memory_limit": _get_str(f"{_PREFIX}GRADER_MEMORY_LIMIT", "256Mi"), + "timeout": _get_int(f"{_PREFIX}GRADER_TIMEOUT", 20), + "docker_host_grader_root": _get_str(f"{_PREFIX}DOCKER_HOST_GRADER_ROOT", None), + } diff --git a/xqueue_watcher/grader.py b/xqueue_watcher/grader.py index ed3744d..78de6a4 100644 --- a/xqueue_watcher/grader.py +++ b/xqueue_watcher/grader.py @@ -2,13 +2,13 @@ Implementation of a grader compatible with XServer """ import html -import os import time import json -from path import Path +from pathlib import Path import logging import multiprocessing -from statsd import statsd + +from . import metrics as _metrics def format_errors(errors): @@ -110,7 +110,7 @@ def grade(self, grader_path, grader_config, student_response): def process_item(self, content, queue=None): try: - statsd.increment('xqueuewatcher.process-item') + _metrics.process_item_counter.add(1) body = content['xqueue_body'] files = content['xqueue_files'] @@ -123,25 +123,42 @@ def process_item(self, content, queue=None): except ValueError as err: # If parsing json fails, erroring is fine--something is wrong in the content. # However, for debugging, still want to see what the problem is - statsd.increment('xqueuewatcher.grader_payload_error') + _metrics.grader_payload_error_counter.add(1) self.log.debug(f"error parsing: '{payload}' -- {err}") raise self.log.debug(f"Processing submission, grader payload: {payload}") relative_grader_path = grader_config['grader'] - grader_path = os.path.abspath(self.grader_root / relative_grader_path) + # Reject paths that contain ".." components before resolving to + # avoid symlink edge-cases that could slip past the relative_to() + # check below. 
Absolute paths are still subject to that check. + if '..' in Path(relative_grader_path).parts: + raise ValueError( + f"Grader path {relative_grader_path!r} contains path traversal sequences." + ) + grader_path = (self.grader_root / relative_grader_path).resolve() + # Guard against path traversal: ensure the resolved path stays within grader_root. + try: + grader_path.relative_to(self.grader_root.resolve()) + except ValueError as exc: + raise ValueError( + f"Grader path {relative_grader_path!r} resolves outside " + f"grader_root {self.grader_root!r}" + ) from exc start = time.time() results = self.grade(grader_path, grader_config, student_response) - statsd.histogram('xqueuewatcher.grading-time', time.time() - start) + elapsed = time.time() - start + _metrics.grading_time_histogram.record(elapsed) + self.log.debug('grading-time seconds=%.3f', elapsed) # Make valid JSON message reply = {'correct': results['correct'], 'score': results['score'], 'msg': self.render_results(results)} - statsd.increment('xqueuewatcher.replies (non-exception)') + _metrics.replies_counter.add(1) except Exception as e: self.log.exception("process_item") if queue: diff --git a/xqueue_watcher/jailedgrader.py b/xqueue_watcher/jailedgrader.py index f3f1c7d..dc3f130 100644 --- a/xqueue_watcher/jailedgrader.py +++ b/xqueue_watcher/jailedgrader.py @@ -1,17 +1,24 @@ """ An implementation of a grader that uses codejail to sandbox submission execution. + +NOTE: This grader requires codejail (an optional dependency) and an AppArmor-enabled +host OS. For Kubernetes deployments, use ContainerGrader instead. 
""" import codecs import os import sys import importlib +import importlib.util import json import random import gettext -from path import Path -import six +from pathlib import Path -import codejail +try: + import codejail + import codejail.jail_code +except ImportError: + codejail = None from grader_support.gradelib import EndTest from grader_support.graderutil import LANGUAGE @@ -21,21 +28,8 @@ TIMEOUT = 1 -def path_to_six(): - """ - Return the full path to six.py - """ - if any(six.__file__.endswith(suffix) for suffix in ('.pyc', '.pyo')): - # __file__ points to the compiled bytecode in python 2 - return Path(six.__file__[:-1]) - else: - # __file__ points to the .py file in python 3 - return Path(six.__file__) - - SUPPORT_FILES = [ - Path(grader_support.__file__).dirname(), - path_to_six(), + Path(grader_support.__file__).parent, ] @@ -63,8 +57,16 @@ class JailedGrader(Grader): A grader implementation that uses codejail. Instantiate it with grader_root="path/to/graders" and optionally codejail_python="python name" (the name that you used to configure codejail) + + NOTE: Requires codejail (optional dependency) and an AppArmor-enabled host. + For Kubernetes deployments, use ContainerGrader instead. """ def __init__(self, *args, **kwargs): + if codejail is None: + raise RuntimeError( + "codejail is not installed. JailedGrader requires codejail and an " + "AppArmor-enabled host. For containerized deployments use ContainerGrader." 
+ ) self.codejail_python = kwargs.pop("codejail_python", "python") super().__init__(*args, **kwargs) self.locale_dir = self.grader_root / "conf" / "locale" @@ -81,7 +83,7 @@ def _run(self, grader_path, thecode, seed): if self.locale_dir.exists(): files.append(self.locale_dir) extra_files = [('submission.py', thecode.encode('utf-8'))] - argv = ["-m", "grader_support.run", Path(grader_path).basename(), 'submission.py', seed] + argv = ["-B", "-m", "grader_support.run", Path(grader_path).name, 'submission.py', seed] r = codejail.jail_code.jail_code(self.codejail_python, files=files, extra_files=extra_files, argv=argv) return r @@ -115,15 +117,16 @@ def grade(self, grader_path, grader_config, submission): self._enable_i18n(grader_config.get("lang", LANGUAGE)) - answer_path = Path(grader_path).dirname() / 'answer.py' + answer_path = Path(grader_path).parent / 'answer.py' with open(answer_path, 'rb') as f: answer = f.read().decode('utf-8') # Import the grader, straight from the original file. (It probably isn't in # sys.path, and we may be in a long running gunicorn process, so we don't # want to add stuff to sys.path either.) 
- sf_loader = importlib.machinery.SourceFileLoader("grader_module", str(grader_path)) - grader_module = sf_loader.load_module() + spec = importlib.util.spec_from_file_location("grader_module", str(grader_path)) + grader_module = importlib.util.module_from_spec(spec) + spec.loader.exec_module(grader_module) grader = grader_module.grader # Preprocess for grader-specified errors @@ -279,8 +282,8 @@ def main(args): # pragma: no cover submission = f.read().decode('utf-8') grader_config = {"lang": "eo"} - grader_path = path(grader_path).abspath() - g = JailedGrader(grader_root=grader_path.dirname().parent.parent) + grader_path = Path(grader_path).resolve() + g = JailedGrader(grader_root=grader_path.parent.parent.parent) pprint(g.grade(grader_path, grader_config, submission)) diff --git a/xqueue_watcher/manager.py b/xqueue_watcher/manager.py index ad9c60a..788200b 100644 --- a/xqueue_watcher/manager.py +++ b/xqueue_watcher/manager.py @@ -5,14 +5,19 @@ import json import logging import logging.config -from path import Path +from pathlib import Path import signal import sys import time -from codejail import jail_code +try: + from codejail import jail_code as _codejail_jail_code +except ImportError: + _codejail_jail_code = None -from .settings import get_manager_config_values, MANAGER_CONFIG_DEFAULTS +from .settings import get_manager_config_values, get_xqueue_servers, MANAGER_CONFIG_DEFAULTS +from .metrics import configure_metrics +from .env_settings import configure_logging class Manager: @@ -23,18 +28,45 @@ def __init__(self): self.clients = [] self.log = logging self.manager_config = MANAGER_CONFIG_DEFAULTS.copy() + self.xqueue_servers = {} def client_from_config(self, queue_name, watcher_config): """ Return an XQueueClient from the configuration object. + + Queue configs may specify a ``SERVER_REF`` key whose value is the name + of a server defined in ``xqueue_servers.json``. 
When present, the + referenced server's ``SERVER`` and ``AUTH`` values are used and the + queue config must not also supply ``SERVER`` or ``AUTH`` directly. """ from . import client + server_ref = watcher_config.get('SERVER_REF') + if server_ref is not None: + if 'SERVER' in watcher_config or 'AUTH' in watcher_config: + raise ValueError( + f"Queue '{queue_name}': 'SERVER_REF' cannot be used together " + "with 'SERVER' or 'AUTH'. Remove 'SERVER' and 'AUTH' when " + "using a named server reference." + ) + if server_ref not in self.xqueue_servers: + known = list(self.xqueue_servers) + raise ValueError( + f"Queue '{queue_name}': unknown SERVER_REF '{server_ref}'. " + f"Known server names: {known}" + ) + server_config = self.xqueue_servers[server_ref] + xqueue_server = server_config['SERVER'] + xqueue_auth = server_config['AUTH'] + else: + xqueue_server = watcher_config.get('SERVER', 'http://localhost:18040') + xqueue_auth = watcher_config.get('AUTH', (None, None)) + klass = getattr(client, watcher_config.get('CLASS', 'XQueueClientThread')) watcher = klass( - queue_name, - xqueue_server=watcher_config.get('SERVER', 'http://localhost:18040'), - xqueue_auth=watcher_config.get('AUTH', (None, None)), + queue_name=watcher_config.get('NAME_OVERRIDE', None) or queue_name, + xqueue_server=xqueue_server, + xqueue_auth=xqueue_auth, http_basic_auth=self.manager_config['HTTP_BASIC_AUTH'], requests_timeout=self.manager_config['REQUESTS_TIMEOUT'], poll_interval=self.manager_config['POLL_INTERVAL'], @@ -87,20 +119,25 @@ def configure_from_directory(self, directory): with open(log_config) as config: logging.config.dictConfig(json.load(config)) else: - logging.basicConfig(level="DEBUG") + configure_logging() self.log = logging.getLogger('xqueue_watcher.manager') + configure_metrics() + app_config_path = directory / 'xqwatcher.json' self.manager_config = get_manager_config_values(app_config_path) + servers_config_path = directory / 'xqueue_servers.json' + self.xqueue_servers = 
get_xqueue_servers(servers_config_path) + confd = directory / 'conf.d' - for watcher in confd.files('*.json'): + for watcher in sorted(confd.glob('*.json')): with open(watcher) as queue_config: self.configure(json.load(queue_config)) def enable_codejail(self, codejail_config): """ - Enable codejail for the process. + Enable codejail for the process (legacy AppArmor-based sandbox). codejail_config is a dict like this: { "name": "python", @@ -114,13 +151,18 @@ def enable_codejail(self, codejail_config): limits are optional user defaults to the current user """ + if _codejail_jail_code is None: + raise RuntimeError( + "codejail is not installed. Cannot configure AppArmor-based sandboxing. " + "Use ContainerGrader for containerized deployments." + ) name = codejail_config["name"] bin_path = codejail_config['bin_path'] user = codejail_config.get('user', getpass.getuser()) - jail_code.configure(name, bin_path, user=user) + _codejail_jail_code.configure(name, bin_path, user=user) limits = codejail_config.get("limits", {}) for limit_name, value in limits.items(): - jail_code.set_limit(limit_name, value) + _codejail_jail_code.set_limit(limit_name, value) self.log.info("configured codejail -> %s %s %s", name, bin_path, user) return name diff --git a/xqueue_watcher/metrics.py b/xqueue_watcher/metrics.py new file mode 100644 index 0000000..437f9d4 --- /dev/null +++ b/xqueue_watcher/metrics.py @@ -0,0 +1,78 @@ +""" +OpenTelemetry metrics for xqueue-watcher. + +Call :func:`configure_metrics` once at process startup (before the first +submission is processed). All configuration is read from the standard +OpenTelemetry environment variables so no application-level config files are +needed: + +``OTEL_EXPORTER_OTLP_ENDPOINT`` + OTLP collector endpoint, e.g. ``http://otel-collector:4318``. + When absent or empty, metrics are recorded in-process but not exported. +``OTEL_SERVICE_NAME`` + Service name attached to every metric (default: ``xqueue-watcher``). 
+``OTEL_RESOURCE_ATTRIBUTES`` + Additional resource attributes as ``key=value,...`` pairs. Parsed + automatically by the OpenTelemetry SDK's ``Resource.create()`` call — + no custom parsing is needed in this module. +""" + +import os + +from opentelemetry import metrics +from opentelemetry.sdk.metrics import MeterProvider +from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader +from opentelemetry.sdk.resources import Resource + +_METER_NAME = "xqueue_watcher" +_DEFAULT_SERVICE_NAME = "xqueue-watcher" + + +def _build_meter_provider() -> MeterProvider: + resource = Resource.create( + {"service.name": os.environ.get("OTEL_SERVICE_NAME", "").strip() or _DEFAULT_SERVICE_NAME} + ) + readers = [] + if os.environ.get("OTEL_EXPORTER_OTLP_ENDPOINT", "").strip(): + from opentelemetry.exporter.otlp.proto.http.metric_exporter import OTLPMetricExporter + readers.append(PeriodicExportingMetricReader(OTLPMetricExporter())) + return MeterProvider(resource=resource, metric_readers=readers) + + +def configure_metrics() -> None: + """Configure the global OTel MeterProvider from environment variables.""" + metrics.set_meter_provider(_build_meter_provider()) + + +# --------------------------------------------------------------------------- +# Instruments +# +# Created at module level against the global proxy meter. The OTel proxy +# delegates transparently to whichever MeterProvider is active, so these +# instruments work correctly whether configure_metrics() has been called or +# not (unmeasured data simply goes to the no-op provider until the real +# provider is installed). 
+# --------------------------------------------------------------------------- + +_meter = metrics.get_meter(_METER_NAME) + +process_item_counter = _meter.create_counter( + "xqueuewatcher.process_item", + description="Number of grading submissions received.", +) + +grader_payload_error_counter = _meter.create_counter( + "xqueuewatcher.grader_payload_error", + description="Number of submissions whose grader_payload could not be parsed.", +) + +grading_time_histogram = _meter.create_histogram( + "xqueuewatcher.grading_time", + unit="s", + description="Wall-clock time in seconds spent grading a single submission.", +) + +replies_counter = _meter.create_counter( + "xqueuewatcher.replies", + description="Number of successful (non-exception) grading replies sent.", +) diff --git a/xqueue_watcher/settings.py b/xqueue_watcher/settings.py index 44e0555..3c378cc 100644 --- a/xqueue_watcher/settings.py +++ b/xqueue_watcher/settings.py @@ -19,3 +19,26 @@ def get_manager_config_values(app_config_path): config_key: config_tokens.get(config_key, default_config_value) for config_key, default_config_value in MANAGER_CONFIG_DEFAULTS.items() } + + +def get_xqueue_servers(servers_config_path): + """ + Load named XQueue server definitions from xqueue_servers.json. + + Returns a dict mapping server names to their connection config dicts, + each containing 'SERVER' (URL string) and 'AUTH' ([username, password]). + Returns an empty dict if the file does not exist. + + Raises ValueError if any server entry is missing required keys. 
+ """ + if not servers_config_path.exists(): + return {} + with open(servers_config_path) as config: + servers = json.load(config) + for name, server_config in servers.items(): + missing = [k for k in ('SERVER', 'AUTH') if k not in server_config] + if missing: + raise ValueError( + f"xqueue_servers.json: server '{name}' is missing required key(s): {missing}" + ) + return servers From 8bba091b29458fa16ecd5c1c9d6769007e34d9d8 Mon Sep 17 00:00:00 2001 From: Tobias Macey Date: Tue, 24 Mar 2026 14:56:02 -0400 Subject: [PATCH 2/4] fix: remove invalid pids container resource limit Kubernetes does not support 'pids' as a container-level resource limit. Setting it causes a 422 Unprocessable Entity error when creating grading Jobs. PID limits must instead be enforced at the namespace level via a LimitRange or at the node level via kubelet --pod-pids-limit. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- xqueue_watcher/containergrader.py | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/xqueue_watcher/containergrader.py b/xqueue_watcher/containergrader.py index 7627f3c..50a238c 100644 --- a/xqueue_watcher/containergrader.py +++ b/xqueue_watcher/containergrader.py @@ -12,7 +12,8 @@ - RuntimeDefault seccomp profile (restricts available syscalls) - /tmp emptyDir with a size cap (prevents disk exhaustion) - No service-account token auto-mounted - - CPU, memory, and PID limits to prevent resource exhaustion + - CPU and memory limits to prevent resource exhaustion + (PID limits must be enforced via a namespace LimitRange or kubelet --pod-pids-limit) Operators should also ensure: - The grader namespace enforces the Kubernetes "restricted" Pod Security Standard @@ -410,7 +411,6 @@ def _build_k8s_job(self, job_name, grader_path, code, seed, grader_config=None): limits={ "cpu": self.cpu_limit, "memory": self.memory_limit, - "pids": "256", }, requests={ "cpu": "100m", From 1bb27c0a1f82983cb850d143733af9a33b0670bf Mon Sep 17 00:00:00 2001 
From: Tobias Macey
Date: Mon, 30 Mar 2026 14:03:30 -0400
Subject: [PATCH 3/4] fix: address feanil's PR #122 review feedback

- ci.yml: expand Python matrix to 3.12/3.13/3.14
- publish-grader-base-image.yml: update registry to openedx namespace, drop
  dev branch triggers, replace single build with 3-version matrix, tag images
  as py<version>-latest (+ latest for newest)
- Dockerfile.base: parameterise Python version via ARG PYTHON_VERSION, drop
  hardcoded 3.11
- tests/test_container_grader.py: add TestParseMemoryBytes (10 tests)
  covering IEC binary, SI decimal, and plain integer inputs

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
---
 .github/workflows/ci.yml                        |  2 +-
 .../workflows/publish-grader-base-image.yml     | 35 ++++++++---------
 grader_support/Dockerfile.base                  |  3 +-
 tests/test_container_grader.py                  | 39 +++++++++++++++++++
 4 files changed, 59 insertions(+), 20 deletions(-)

diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index 8f0f876..4f4ebec 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -16,7 +16,7 @@ jobs:
       matrix:
         os:
           - ubuntu-latest
-        python-version: ['3.12', '3.13']
+        python-version: ['3.12', '3.13', '3.14']

     steps:
     - uses: actions/checkout@v4
diff --git a/.github/workflows/publish-grader-base-image.yml b/.github/workflows/publish-grader-base-image.yml
index 3eaac4c..4e11697 100644
--- a/.github/workflows/publish-grader-base-image.yml
+++ b/.github/workflows/publish-grader-base-image.yml
@@ -1,14 +1,16 @@
 name: Publish grader base image

 # Builds grader_support/Dockerfile.base and pushes to:
-#   - GHCR: ghcr.io/mitodl/xqueue-watcher-grader-base
+#   - GHCR: ghcr.io/openedx/xqueue-watcher-grader-base
+#
+# One image is published per supported Python version, tagged as:
+#   py<version>-latest  (e.g. py3.12-latest, py3.13-latest, py3.14-latest)
+# The newest version is also tagged as `latest`.
on: push: branches: - master - - feat/xqwatcher-kubernetes-migration - - chore/migrate-to-uv-and-k8s-container-grader paths: - "grader_support/**" schedule: @@ -17,12 +19,17 @@ on: workflow_dispatch: env: - IMAGE_NAME: mitodl/xqueue-watcher-grader-base + IMAGE_NAME: openedx/xqueue-watcher-grader-base + # Newest version also receives the `latest` tag + LATEST_PYTHON: "3.14" jobs: build-and-push: - name: Build and push grader base image + name: Build and push grader base image (Python ${{ matrix.python-version }}) runs-on: ubuntu-latest + strategy: + matrix: + python-version: ['3.12', '3.13', '3.14'] permissions: contents: read packages: write @@ -44,17 +51,6 @@ jobs: - name: Set up Docker Buildx uses: docker/setup-buildx-action@v3 - - name: Extract image metadata - id: meta - uses: docker/metadata-action@v5 - with: - images: | - ghcr.io/${{ env.IMAGE_NAME }} - tags: | - type=raw,value=latest,enable={{is_default_branch}} - type=raw,value=latest,enable=${{ github.ref_name == 'chore/migrate-to-uv-and-k8s-container-grader' || github.ref_name == 'feat/xqwatcher-kubernetes-migration' }} - type=sha,format=short - - name: Build and push uses: docker/build-push-action@v6 with: @@ -62,7 +58,10 @@ jobs: file: grader_support/Dockerfile.base platforms: linux/amd64,linux/arm64 push: true - tags: ${{ steps.meta.outputs.tags }} - labels: ${{ steps.meta.outputs.labels }} + build-args: | + PYTHON_VERSION=${{ matrix.python-version }} + tags: | + ghcr.io/${{ env.IMAGE_NAME }}:py${{ matrix.python-version }}-latest + ${{ matrix.python-version == env.LATEST_PYTHON && format('ghcr.io/{0}:latest', env.IMAGE_NAME) || '' }} cache-from: type=gha cache-to: type=gha,mode=max diff --git a/grader_support/Dockerfile.base b/grader_support/Dockerfile.base index 58e3690..53358cc 100644 --- a/grader_support/Dockerfile.base +++ b/grader_support/Dockerfile.base @@ -1,4 +1,5 @@ -FROM python:3.11-slim AS grader-base +ARG PYTHON_VERSION=3.12 +FROM python:${PYTHON_VERSION}-slim AS grader-base # Create a 
non-root user for running student code RUN useradd -m -u 1000 --shell /bin/false grader diff --git a/tests/test_container_grader.py b/tests/test_container_grader.py index 87a26df..ea09075 100644 --- a/tests/test_container_grader.py +++ b/tests/test_container_grader.py @@ -40,6 +40,45 @@ def test_fractional_cores(self): assert _parse_cpu_millis("0.5") == 500.0 +# --------------------------------------------------------------------------- +# _parse_memory_bytes +# --------------------------------------------------------------------------- + +class TestParseMemoryBytes: + # IEC binary suffixes + def test_mebibytes(self): + assert _parse_memory_bytes("256Mi") == 256 * 1024**2 + + def test_gibibytes(self): + assert _parse_memory_bytes("1Gi") == 1024**3 + + def test_kibibytes(self): + assert _parse_memory_bytes("512Ki") == 512 * 1024 + + def test_tebibytes(self): + assert _parse_memory_bytes("1Ti") == 1024**4 + + # SI decimal suffixes + def test_megabytes(self): + assert _parse_memory_bytes("512M") == 512 * 1000**2 + + def test_gigabytes(self): + assert _parse_memory_bytes("1G") == 1000**3 + + def test_kilobytes(self): + assert _parse_memory_bytes("1K") == 1000 + + def test_terabytes(self): + assert _parse_memory_bytes("1T") == 1000**4 + + # Plain integers + def test_plain_integer_string(self): + assert _parse_memory_bytes("1024") == 1024 + + def test_zero(self): + assert _parse_memory_bytes("0") == 0 + + # --------------------------------------------------------------------------- # ContainerGrader.__init__ # --------------------------------------------------------------------------- From 5704b048124346fb599b497abe355d8bba3b147f Mon Sep 17 00:00:00 2001 From: Tobias Macey Date: Wed, 1 Apr 2026 17:19:43 -0400 Subject: [PATCH 4/4] docs: add operator, course-team, and grader-interface guides - docs/operators.md: installation, all configuration options, environment variables, Docker/Kubernetes deployment, security posture, metrics, and logging reference for platform 
engineers. - docs/course-teams.md: grader authoring guide including three local-testing workflows (CLI entrypoint, docker run, docker-compose) that require no running edX environment. - docs/grader-interface.md: full interface reference covering the watcher-side Grader base class, grade() return schema, all built-in implementations, the grader-container protocol (env-in / JSON-out) for non-Python language support, and the grader_support.gradelib API. - README.md: add documentation table at top; replace TODO with link to the new logging.json section in the operator guide. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- README.md | 10 +- docs/course-teams.md | 590 +++++++++++++++++++++++++++++++++++++ docs/grader-interface.md | 612 +++++++++++++++++++++++++++++++++++++++ docs/operators.md | 461 +++++++++++++++++++++++++++++ 4 files changed, 1672 insertions(+), 1 deletion(-) create mode 100644 docs/course-teams.md create mode 100644 docs/grader-interface.md create mode 100644 docs/operators.md diff --git a/README.md b/README.md index 5ffe2a1..78e063a 100644 --- a/README.md +++ b/README.md @@ -3,6 +3,14 @@ xqueue_watcher This is an implementation of a polling [XQueue](https://github.com/openedx/xqueue) client and grader. 
+## Documentation + +| Guide | Audience | +|-------|----------| +| [Operator Guide](docs/operators.md) | DevOps / platform engineers deploying xqueue-watcher | +| [Course Team Guide](docs/course-teams.md) | Course teams writing graders and testing locally | +| [Grader Interface Reference](docs/grader-interface.md) | Developers extending or customising the grading pipeline | + Overview ======== @@ -149,7 +157,7 @@ The course configuration JSON file in `conf.d` should have the following structu * `KWARGS`: optional keyword arguments to apply during instantiation * `grader_root`: path to the course directory, eg /path/to/my-course -> TODO: document logging.json +See the [Operator Guide](docs/operators.md#loggingjson) for `logging.json` documentation. Submissions Handler =================== diff --git a/docs/course-teams.md b/docs/course-teams.md new file mode 100644 index 0000000..e25dc31 --- /dev/null +++ b/docs/course-teams.md @@ -0,0 +1,590 @@ +# Course Team Guide: Writing and Testing Graders + +This guide explains how to write grader scripts for xqueue-watcher-based courses and +how to test them locally — **without a running Open edX instance or XQueue service**. 
+ +## Table of Contents + +- [How grading works](#how-grading-works) +- [Grader directory structure](#grader-directory-structure) +- [Writing a grader.py](#writing-a-graderpy) + - [Available utilities (grader_support)](#available-utilities-grader_support) + - [Minimal example](#minimal-example) + - [Testing function calls](#testing-function-calls) + - [Testing script-style code](#testing-script-style-code) + - [Input validation](#input-validation) + - [Custom test comparisons](#custom-test-comparisons) +- [Writing an answer.py](#writing-an-answerpy) +- [Configuring the problem in Studio](#configuring-the-problem-in-studio) +- [Local testing without edX](#local-testing-without-edx) + - [Option 1: Run the grading pipeline directly](#option-1-run-the-grading-pipeline-directly) + - [Option 2: Run inside a grader container (Docker)](#option-2-run-inside-a-grader-container-docker) + - [Option 3: Full local stack with docker-compose](#option-3-full-local-stack-with-docker-compose) +- [Building a grader image](#building-a-grader-image) +- [Troubleshooting](#troubleshooting) + + +--- + +## How grading works + +When a student submits code for a programming exercise, the following happens: + +1. The Open edX LMS sends the submission to XQueue. +2. xqueue-watcher picks it up, extracts `grader_payload` from the submission, and + resolves it to a `grader.py` file on the server. +3. The grader runs the **staff answer** (`answer.py`) and the **student submission** + through the test suite defined in `grader.py`. +4. The outputs of each test are compared. Tests where the outputs match are marked + correct. +5. A score and HTML feedback message are returned to the student. + +Both `grader.py` and `answer.py` are authored by the course team. 
+ + +--- + +## Grader directory structure + +Each problem that uses xqueue-watcher needs a folder containing: + +``` +my-course/ +└── unit-2/ + └── exercise-3/ + ├── grader.py # defines the tests + └── answer.py # the reference (correct) solution +``` + +The path `unit-2/exercise-3/grader.py` is what goes into the Studio grader payload +(see [Configuring the problem in Studio](#configuring-the-problem-in-studio)). + + +--- + +## Writing a grader.py + +`grader.py` defines a **`grader`** object — an instance of `grader_support.gradelib.Grader`. +The grader collects tests, preprocessors, and input validators. When a submission +arrives, xqueue-watcher calls these in order to produce a result. + +> **Important**: `grader.py` must assign to a module-level variable named exactly +> `grader`. xqueue-watcher imports the file and reads `grader_module.grader`. + +### Available utilities (grader_support) + +`grader_support` is bundled with xqueue-watcher and is always available inside grading +containers. Import it at the top of your `grader.py`: + +```python +from grader_support import gradelib +``` + +Key classes and functions: + +| Name | Description | +|------|-------------| +| `gradelib.Grader` | Base class for the grader object. Add tests, preprocessors, and input checks to it. | +| `gradelib.Test` | Wrap a test function with descriptions and an optional custom comparator. | +| `gradelib.InvokeStudentFunctionTest` | Convenience: call a named function from the student submission with given arguments. | +| `gradelib.ExecWrappedStudentCodeTest` | Convenience: exec the student code in a namespace and capture stdout. | +| `gradelib.wrap_in_string` | Preprocessor: wrap raw code in a string so it can be exec'd multiple times. | +| `gradelib.fix_line_endings` | Preprocessor: normalise `\r\n` to `\n` (installed by default). | +| `gradelib.required_substring` | Input check factory: fail if a required string is missing. 
| +| `gradelib.prohibited_substring` | Input check factory: fail if a forbidden string is present. | +| `gradelib.EndTest` | Exception: raise inside a test to end it early with an error message. | + +### Minimal example + +```python +# grader.py — test that the student's `add` function returns the right value +from grader_support import gradelib + +grader = gradelib.Grader() +grader.add_test(gradelib.InvokeStudentFunctionTest('add', [2, 3])) +grader.add_test(gradelib.InvokeStudentFunctionTest('add', [-1, 1])) +grader.add_test(gradelib.InvokeStudentFunctionTest('add', [0, 0])) +``` + +The matching `answer.py`: + +```python +# answer.py +def add(a, b): + return a + b +``` + +A student submission that defines `add` correctly will pass all three tests. + +### Testing function calls + +`InvokeStudentFunctionTest` calls a function by name from the submission module and +prints the return value to stdout. The test passes if the output matches the staff +answer's output for the same call. + +```python +grader.add_test(gradelib.InvokeStudentFunctionTest('function_name', [arg1, arg2, ...])) +``` + +You can also write a test function directly: + +```python +def test_my_function(submission_module): + result = submission_module.my_function(42) + print(result) # printed output is compared to the staff answer's output + +grader.add_test(gradelib.Test(test_my_function, "Test my_function(42)")) +``` + +### Testing script-style code + +For early exercises where students write top-level code rather than functions, use the +`wrap_in_string` preprocessor together with `ExecWrappedStudentCodeTest`: + +```python +from grader_support import gradelib + +grader = gradelib.Grader() +grader.add_preprocessor(gradelib.wrap_in_string) +grader.add_test(gradelib.ExecWrappedStudentCodeTest({}, "Run the code and check stdout")) +``` + +The `answer.py` for this style contains the same top-level code: + +```python +# answer.py — expected output when the code runs +x = 10 +print(x * 2) +``` + +### 
Input validation
+
+Input checks run **before** the submission is executed. They are safe to use because
+they only inspect the source text. A failed check returns an error message to the
+student and stops grading.
+
+```python
+grader.add_input_check(gradelib.required_substring('def solve('))
+grader.add_input_check(gradelib.prohibited_substring('import os'))
+```
+
+You can write a custom check:
+
+```python
+def must_define_a_function(code):
+    if 'def ' not in code:
+        return "Your solution must define a function."
+    return None  # None means the check passed
+
+grader.add_input_check(must_define_a_function)
+```
+
+### Custom test comparisons
+
+By default, test results are compared with simple string equality. Override
+`compare_results` on a `Test` instance or via `add_tests_from_class` for custom logic:
+
+```python
+def compare_floats(expected, actual):
+    try:
+        return abs(float(expected) - float(actual)) < 1e-6
+    except ValueError:
+        return False
+
+grader.add_test(gradelib.Test(
+    lambda mod: print(mod.compute_pi()),
+    "Test compute_pi()",
+    compare=compare_floats,
+))
+```
+
+The `compare_results(expected, actual)` function receives the stdout output of each run
+as strings and must return `True` (pass) or `False` (fail).
+
+You can also raise `gradelib.EndTest` inside a test to produce a custom error message
+for the student:
+
+```python
+def test_sorted(submission_module):
+    result = submission_module.my_sort([3, 1, 2])
+    if not isinstance(result, list):
+        raise gradelib.EndTest("my_sort should return a list, not {!r}".format(type(result).__name__))
+    print(result)
+```
+
+
+---
+
+## Writing an answer.py
+
+`answer.py` is the **reference solution**. It is run through the same test suite as
+the student submission; its output becomes the "expected" result for each test.
+
+Rules:
+- It must be in the same directory as `grader.py`.
+- It must produce correct output for every test in `grader.py`.
+- It is never shown to students — it runs inside the grading container only. + +```python +# answer.py +def add(a, b): + return a + b +``` + + +--- + +## Configuring the problem in Studio + +In the Open edX Studio problem editor, set the **grader payload** to a JSON object +containing the relative path to your `grader.py` from the `grader_root` configured for +your queue: + +```json +{"grader": "unit-2/exercise-3/grader.py"} +``` + +Additional fields in the grader payload are passed to the grader as `grader_config` and +can be read inside `grader.py` if needed: + +```json +{ + "grader": "unit-2/exercise-3/grader.py", + "lang": "en", + "hide_output": false, + "skip_grader": false +} +``` + +| Field | Description | +|-------|-------------| +| `grader` | **Required.** Relative path to `grader.py` from `grader_root`. | +| `lang` | Language code for i18n in feedback messages (default: `en`). | +| `hide_output` | If `true`, test output details are hidden from the student (default: `false`). | +| `skip_grader` | If `true`, always marks the submission correct with a full score. Useful for problems where automated grading is not feasible (default: `false`). | + + +--- + +## Local testing without edX + +You can run the complete grading pipeline locally without an Open edX instance, an +XQueue service, or any network connectivity. Pick the option that fits your workflow. + +### Option 1: Run the grading pipeline directly + +This is the fastest approach. Install xqueue-watcher with its dependencies, then call +`grader_support.entrypoint` directly from the command line: + +```bash +# Install xqueue-watcher (once) +cd xqueue-watcher/ +uv sync # or: pip install -e . + +# Add grader_support to your Python path +export PYTHONPATH="$PYTHONPATH:$(pwd)" +``` + +Create a file containing the student submission you want to test, e.g. 
+`/tmp/student_submission.py`: + +```python +def add(a, b): + return a + b +``` + +Then run the entrypoint, passing the `SUBMISSION_CODE` environment variable: + +```bash +SUBMISSION_CODE="$(cat /tmp/student_submission.py)" \ + python -m grader_support.entrypoint \ + /path/to/my-course/unit-2/exercise-3/grader.py \ + 42 +``` + +The second argument (`42`) is the random seed; any integer works for testing. + +The output is a JSON object: + +```json +{ + "errors": [], + "tests": [ + ["Test: add 2 3", "", true, "5\n", "5\n"] + ], + "correct": true, + "score": 1.0 +} +``` + +Each entry in `tests` is `[short_description, long_description, correct, expected_output, actual_output]`. + +**Debugging:** set `GRADER_DEBUG=1` to see step-by-step trace output on stderr: + +```bash +GRADER_DEBUG=1 SUBMISSION_CODE="def add(a,b): return a+b" \ + python -m grader_support.entrypoint \ + /path/to/exercise-3/grader.py 42 +``` + +**Testing an incorrect submission:** + +```bash +SUBMISSION_CODE="def add(a, b): return a - b" \ + python -m grader_support.entrypoint \ + /path/to/exercise-3/grader.py 42 +``` + +```json +{ + "errors": [], + "tests": [ + ["Test: add 2 3", "", false, "5\n", "-1\n"] + ], + "correct": false, + "score": 0.0 +} +``` + +**Testing input validation errors:** + +```bash +SUBMISSION_CODE="x = 1" \ + python -m grader_support.entrypoint \ + /path/to/exercise-3/grader.py 42 +``` + +If the `grader.py` requires a function definition the output will contain an `errors` +entry instead of running any tests. + + +### Option 2: Run inside a grader container (Docker) + +If your grader image has dependencies beyond the standard library, or you want to test +exactly the environment that runs in production, build and run the grader container +locally. + +**Step 1 — Build the base image** (once per xqueue-watcher checkout): + +```bash +docker build \ + -f grader_support/Dockerfile.base \ + -t grader-base:local \ + . 
+``` + +**Step 2 — Write your course-specific `Dockerfile`**: + +```dockerfile +FROM grader-base:local + +# Copy your graders into the image +COPY my-course/ /grader/my-course/ + +# Install any course-specific Python dependencies +# COPY requirements.txt . +# RUN pip install -r requirements.txt +``` + +**Step 3 — Build your grader image**: + +```bash +docker build -t my-course-grader:local . +``` + +**Step 4 — Run a grading job**: + +```bash +docker run --rm \ + -e SUBMISSION_CODE="def add(a, b): return a + b" \ + my-course-grader:local \ + /grader/my-course/unit-2/exercise-3/grader.py 42 +``` + +The output is the same JSON as Option 1. + +**Debugging inside the container**: + +```bash +docker run --rm -it \ + -e SUBMISSION_CODE="def add(a, b): return a + b" \ + -e GRADER_DEBUG=1 \ + my-course-grader:local \ + /grader/my-course/unit-2/exercise-3/grader.py 42 +``` + +**Iterating on grader scripts without rebuilding**: + +During development, bind-mount your grader directory so changes take effect immediately: + +```bash +docker run --rm \ + -e SUBMISSION_CODE="def add(a, b): return a + b" \ + -v "$(pwd)/my-course:/grader/my-course:ro" \ + my-course-grader:local \ + /grader/my-course/unit-2/exercise-3/grader.py 42 +``` + + +### Option 3: Full local stack with docker-compose + +Use the included `docker-compose.yml` to run a complete local environment (XQueue + +xqueue-watcher + a sample grader) and test the full submission flow end-to-end. 
+ +**Step 1 — Set the host-side grader root path**: + +The Docker backend needs to know the absolute host-side path to your grader data (see +the [Operators Guide](operators.md#docker--docker-compose) for why): + +```bash +export XQWATCHER_DOCKER_HOST_GRADER_ROOT="$(pwd)/data" +``` + +**Step 2 — Put your graders in `data/`**: + +``` +data/ +└── unit-2/ + └── exercise-3/ + ├── grader.py + └── answer.py +``` + +**Step 3 — Update `conf.d/600.json`** to point at your grader: + +```json +{ + "test-123": { + "SERVER_REF": "default", + "CONNECTIONS": 1, + "HANDLERS": [ + { + "HANDLER": "xqueue_watcher.containergrader.ContainerGrader", + "KWARGS": { + "grader_root": "/graders/", + "image": "grader-base:local", + "backend": "docker", + "timeout": 20 + } + } + ] + } +} +``` + +**Step 4 — Start the stack**: + +```bash +docker compose build # builds grader-base:local +docker compose up +``` + +**Step 5 — Submit a test submission via the XQueue API**: + +```bash +# Authenticate +curl -c /tmp/xqueue-cookie.txt \ + -X POST http://localhost:18040/xqueue/login/ \ + -d 'username=lms&password=password' + +# Push a submission +curl -b /tmp/xqueue-cookie.txt \ + -X POST http://localhost:18040/xqueue/submit/ \ + -H 'Content-Type: application/json' \ + -d '{ + "xqueue_header": "{\"lms_callback_url\": \"http://host.docker.internal:8000/\", \"lms_key\": \"test\", \"queue_name\": \"test-123\"}", + "xqueue_body": "{\"student_response\": \"def add(a,b): return a+b\", \"grader_payload\": \"{\\\"grader\\\": \\\"unit-2/exercise-3/grader.py\\\"}\"}" + }' +``` + +xqueue-watcher will pick up the submission, run it through the grader container, and +return the result to XQueue. Watch the logs with: + +```bash +docker compose logs -f xqueue-watcher +``` + + +--- + +## Building a grader image + +When deploying to Kubernetes, grader scripts and dependencies are baked into a +course-specific Docker image that extends `grader-base`. 
+
+A minimal course `Dockerfile`:
+
+```dockerfile
+# In production, replace grader-base:local with your registry path
+FROM grader-base:local
+
+# Copy grader scripts
+COPY my-course/ /grader/my-course/
+
+# Install course-specific Python packages
+RUN pip install --no-cache-dir numpy scipy
+```
+
+The `grader-base` image sets the entrypoint to `python -m grader_support.entrypoint`,
+so no `CMD` or `ENTRYPOINT` override is needed.
+
+The image is used as-is by `ContainerGrader`; the watcher passes the submission code
+via the `SUBMISSION_CODE` environment variable and the path to `grader.py` as an
+argument.
+
+**Image tagging for production**: use digest-pinned references
+(`registry.example.com/my-course-grader@sha256:…`) in your `conf.d` configuration, or
+enable `poll_image_digest: true` so the watcher resolves the latest digest
+automatically.
+
+
+---
+
+## Troubleshooting
+
+**"There was a problem running the staff solution"**
+
+This error means `answer.py` itself failed. Run the grader locally with `GRADER_DEBUG=1`
+and look for a Python traceback in stderr. Common causes:
+- `answer.py` has a syntax error or raises an exception.
+- A required dependency is missing from the grader image.
+- The `grader.py` test exercises functionality that `answer.py` does not implement.
+
+**"We couldn't run your solution"**
+
+The student submission raised an unhandled exception. With `GRADER_DEBUG=1` you can
+see the exception detail in stderr. The student will see only a generic error message.
+
+**"Something went wrong: different numbers of tests ran"**
+
+The student submission caused a different number of tests to run than the staff answer.
+This usually means the student submission crashed partway through. Investigate with
+`GRADER_DEBUG=1`.
+
+**Grader times out**
+
+The default timeout is 20 seconds.
Increase it in your conf.d `KWARGS`: + +```json +"KWARGS": { + "timeout": 60 +} +``` + +For local Docker testing, timeouts are applied by the Docker backend the same way as +the Kubernetes backend. + +**Import errors in grader.py or answer.py** + +Ensure all required packages are installed in the grader image. Test the image +interactively: + +```bash +docker run --rm -it --entrypoint python my-course-grader:local +>>> import numpy # verify the package is available +``` + +**Container exits immediately with no output** + +Check that `SUBMISSION_CODE` is set and non-empty. If `SUBMISSION_CODE` is empty the +entrypoint may produce an empty result or an error. Run with `GRADER_DEBUG=1` to see +what the entrypoint received. diff --git a/docs/grader-interface.md b/docs/grader-interface.md new file mode 100644 index 0000000..523848a --- /dev/null +++ b/docs/grader-interface.md @@ -0,0 +1,612 @@ +# Grader Interface Reference + +This document describes the interfaces that xqueue-watcher uses to grade student +submissions. 
Understanding this interface is necessary if you want to: + +- Implement a custom grader in Python +- Grade submissions written in a language other than Python +- Extend the built-in grading pipeline + +## Table of Contents + +- [Architecture overview](#architecture-overview) +- [The watcher-side Grader interface](#the-watcher-side-grader-interface) + - [Grader base class](#grader-base-class) + - [Return value of grade()](#return-value-of-grade) + - [Submission payload format](#submission-payload-format) +- [Built-in grader implementations](#built-in-grader-implementations) + - [Grader (base, no sandbox)](#grader-base-no-sandbox) + - [JailedGrader (AppArmor sandbox)](#jailedgrader-apparmor-sandbox) + - [ContainerGrader (Docker / Kubernetes)](#containergrader-docker--kubernetes) +- [Implementing a custom Python grader](#implementing-a-custom-python-grader) +- [Supporting other languages](#supporting-other-languages) + - [Strategy 1: Custom watcher-side Grader subclass](#strategy-1-custom-watcher-side-grader-subclass) + - [Strategy 2: Custom grader container image](#strategy-2-custom-grader-container-image) +- [The grader container protocol](#the-grader-container-protocol) + - [Inputs](#inputs) + - [Output](#output) + - [Exit codes](#exit-codes) +- [The Python grading pipeline (grader_support)](#the-python-grading-pipeline-grader_support) + - [grader_support.gradelib.Grader](#grader_supportgradelibgrader) + - [grader_support.gradelib.Test](#grader_supportgradelibtest) + - [Preprocessors](#preprocessors) + - [Input checks](#input-checks) + - [grader_support.run.run()](#grader_supportrunrun) +- [HTML result rendering](#html-result-rendering) + + +--- + +## Architecture overview + +There are two distinct "grader" concepts in xqueue-watcher; it is important not to +confuse them: + +``` +XQueue → xqueue_watcher.grader.Grader (watcher-side: receives submissions) + │ + └── grade() calls ──► grading backend + │ + ▼ + grader.py + answer.py + (course-side: defines tests) +``` 
+ +1. **Watcher-side grader** (`xqueue_watcher.grader.Grader` and its subclasses): + Receives a raw submission from XQueue, extracts the student code and grader path, + invokes the grading backend, and formats the result as HTML to send back. + +2. **Course-side grader** (`grader_support.gradelib.Grader` and `grader.py`): + A Python module that defines tests, preprocessors, and input validators for a + specific exercise. This runs inside the grading container (or sandbox). + +When grading a non-Python language you only need to replace the grading backend — the +watcher-side interface remains the same. + + +--- + +## The watcher-side Grader interface + +### Grader base class + +`xqueue_watcher.grader.Grader` is the abstract base class for all watcher-side graders. +To implement a custom grader, subclass it and override `grade()`: + +```python +from xqueue_watcher.grader import Grader + +class MyGrader(Grader): + def grade(self, grader_path, grader_config, student_response): + # ... run grading logic ... + return { + 'correct': True, + 'score': 1.0, + 'tests': [ + ('Test description', 'Long description', True, 'expected\n', 'actual\n') + ], + 'errors': [], + } +``` + +**Constructor parameters** (passed as `KWARGS` in conf.d): + +| Parameter | Default | Description | +|-----------|---------|-------------| +| `grader_root` | `'/tmp/'` | Absolute path to the root directory containing grader scripts. The `grader` field from the submission's `grader_payload` is resolved relative to this path. | +| `fork_per_item` | `True` | Fork a new process for each submission. `JailedGrader` and `ContainerGrader` set this to `False`. | +| `logger_name` | module name | Name of the Python logger to use. 
| + +**`grade()` signature:** + +```python +def grade(self, grader_path: Path, grader_config: dict, student_response: str) -> dict: +``` + +| Argument | Type | Description | +|----------|------|-------------| +| `grader_path` | `pathlib.Path` | Absolute path to the `grader.py` file for this problem (already validated to be within `grader_root`). | +| `grader_config` | `dict` | Parsed JSON from the submission's `grader_payload`. Always contains `"grader"` (the relative path); may contain additional course-defined keys. | +| `student_response` | `str` | The raw student-submitted code as a string. | + +The base class raises `NotImplementedError`. Subclasses must override this method. + +### Return value of grade() + +`grade()` must return a `dict` with the following keys: + +```python +{ + 'correct': bool, # True if the overall submission is correct + 'score': float, # 0.0 – 1.0 (fraction of tests passed) + 'tests': list, # list of per-test result tuples (see below) + 'errors': list[str], # list of error messages to show the student +} +``` + +Each entry in `tests` is a 5-tuple: + +``` +(short_description, long_description, correct, expected_output, actual_output) +``` + +| Position | Type | Description | +|----------|------|-------------| +| 0 | `str` | Short test description (shown as a heading). | +| 1 | `str` | Long test description (can be empty string). | +| 2 | `bool` | Whether this individual test passed. | +| 3 | `str` | Expected output (from the staff answer). | +| 4 | `str` | Actual output (from the student submission). | + +When `errors` is non-empty the overall result is marked `ERROR` regardless of +`correct`. Tests that did not run may be omitted from `tests`. 
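+
+To make the schema concrete, here is a minimal illustrative sketch of assembling
+the return dict from per-test tuples — `build_result` is a hypothetical helper,
+not a function shipped with xqueue-watcher:
+
```python
def build_result(tests, errors):
    """Assemble a grade() result dict from 5-tuples of
    (short_desc, long_desc, correct, expected, actual).

    Illustrative sketch only -- not part of xqueue-watcher.
    """
    passed = sum(1 for t in tests if t[2])
    total = len(tests)
    return {
        'correct': not errors and total > 0 and passed == total,
        'score': passed / total if total else 0.0,
        'tests': tests,
        'errors': errors,
    }

# One of two tests passed: score 0.5, overall incorrect
result = build_result(
    [('Test: add 2 3', '', True, '5\n', '5\n'),
     ('Test: add -1 1', '', False, '0\n', '-2\n')],
    [],
)
```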
+ +### Submission payload format + +xqueue-watcher receives submissions from XQueue in this structure: + +```json +{ + "xqueue_body": "{\"student_response\": \"...\", \"grader_payload\": \"...\", \"student_info\": {...}}", + "xqueue_files": {} +} +``` + +The `grader_payload` field is a JSON string (double-encoded) that must contain at least: + +```json +{"grader": "relative/path/to/grader.py"} +``` + +The `grader` path is resolved relative to `grader_root`. Path traversal sequences +(`..`) are rejected. The resolved path must remain within `grader_root`. + + +--- + +## Built-in grader implementations + +### Grader (base, no sandbox) + +`xqueue_watcher.grader.Grader` + +The base class — does not implement `grade()`. Used directly only when a fully custom +`grade()` implementation is provided. When subclassed and `grade()` is left unimplemented +an exception is raised for every submission. + +### JailedGrader (AppArmor sandbox) + +`xqueue_watcher.jailedgrader.JailedGrader` + +Runs Python submissions inside [CodeJail](https://github.com/openedx/codejail), which +uses Linux AppArmor to restrict what sandboxed code can do. + +> **Note**: Requires an AppArmor-enabled host and the optional `codejail` dependency. +> This grader is **not suitable for Kubernetes** deployments. Use `ContainerGrader` +> instead. + +**Additional `KWARGS`**: + +| Key | Default | Description | +|-----|---------|-------------| +| `codejail_python` | `"python"` | Name of the CodeJail sandbox to use (as configured with `jail_code.configure()`). | + +**`CODEJAIL` handler config** (configures CodeJail in the manager): + +```json +{ + "HANDLER": "xqueue_watcher.jailedgrader.JailedGrader", + "CODEJAIL": { + "name": "python", + "bin_path": "/path/to/sandbox/python", + "user": "sandbox_username", + "limits": { + "CPU": 1, + "VMEM": 536870912 + } + }, + "KWARGS": { + "grader_root": "/path/to/graders/" + } +} +``` + +`JailedGrader` expects `grader.py` and `answer.py` to exist in the same directory. 
+It runs both through the Python grading pipeline described in +[The Python grading pipeline](#the-python-grading-pipeline-grader_support). + +### ContainerGrader (Docker / Kubernetes) + +`xqueue_watcher.containergrader.ContainerGrader` + +The recommended grader for Kubernetes deployments. Runs each submission in an +isolated container (a Kubernetes Job or a local Docker container). + +**`KWARGS`**: + +| Key | Env override | Default | Description | +|-----|-------------|---------|-------------| +| `grader_root` | — | required | Path to the grader directory inside the container (or bind-mounted from the host for the Docker backend). | +| `image` | — | required | Docker image to run for grading. Must extend `grader-base`. | +| `backend` | `XQWATCHER_GRADER_BACKEND` | `"kubernetes"` | `"kubernetes"` or `"docker"`. | +| `namespace` | `XQWATCHER_GRADER_NAMESPACE` | `"default"` | Kubernetes namespace for grading Jobs. | +| `cpu_limit` | `XQWATCHER_GRADER_CPU_LIMIT` | `"500m"` | CPU limit for grading containers. | +| `memory_limit` | `XQWATCHER_GRADER_MEMORY_LIMIT` | `"256Mi"` | Memory limit. | +| `timeout` | `XQWATCHER_GRADER_TIMEOUT` | `20` | Max wall-clock seconds per grading job. | +| `docker_host_grader_root` | `XQWATCHER_DOCKER_HOST_GRADER_ROOT` | `None` | Host-side path to `grader_root` when xqueue-watcher runs in Docker. | +| `image_pull_policy` | — | auto | Kubernetes `imagePullPolicy`. Auto-detected from image ref: `"IfNotPresent"` for digest refs, `"Always"` for tag refs. | +| `poll_image_digest` | — | `false` | Resolve tag to digest in the background; use pinned digest for grading Jobs. | +| `digest_poll_interval` | — | `300` | Seconds between digest resolution polls. | + +See [Operator Guide — ContainerGrader](operators.md#containergrader-docker--kubernetes) +for full deployment guidance. 
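+
+Pulling the table above together, a complete conf.d handler entry for the
+Kubernetes backend might look like the following sketch (the image reference,
+namespace, and limits are illustrative values, not defaults):
+
```json
{
  "HANDLER": "xqueue_watcher.containergrader.ContainerGrader",
  "KWARGS": {
    "grader_root": "/graders/",
    "image": "registry.example.com/my-course-grader:latest",
    "backend": "kubernetes",
    "namespace": "grading",
    "cpu_limit": "500m",
    "memory_limit": "256Mi",
    "timeout": 30,
    "poll_image_digest": true,
    "digest_poll_interval": 300
  }
}
```
+
+With `poll_image_digest` enabled, the tag is resolved to a digest in the background
+so grading Jobs run against a pinned image.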
+ + +--- + +## Implementing a custom Python grader + +To add custom logic at the watcher level (for example, to call an external API or +apply institution-specific rules before returning a result), subclass +`xqueue_watcher.grader.Grader`: + +```python +# my_package/mygrader.py +from xqueue_watcher.grader import Grader + +class MyGrader(Grader): + def __init__(self, rubric_path, **kwargs): + super().__init__(**kwargs) + self.rubric_path = rubric_path + + def grade(self, grader_path, grader_config, student_response): + # Call the base ContainerGrader logic, a subprocess, an API, etc. + # Must return a dict matching the schema in "Return value of grade()". + ... +``` + +Register it in conf.d: + +```json +{ + "HANDLER": "my_package.mygrader.MyGrader", + "KWARGS": { + "grader_root": "/graders/", + "rubric_path": "/rubrics/course-101.json" + } +} +``` + +The `my_package` module must be importable from the Python environment where +xqueue-watcher runs. + + +--- + +## Supporting other languages + +xqueue-watcher is not limited to Python. Two strategies exist for grading +submissions in other languages. + +### Strategy 1: Custom watcher-side Grader subclass + +Write a `Grader` subclass whose `grade()` method invokes an external tool or service +to run and evaluate the student submission. The subclass is responsible for: + +- Running the student code in an appropriate sandbox. +- Collecting test results. +- Returning a dict matching the [grade() return schema](#return-value-of-grade). + +Example skeleton for a Java grader: + +```python +import subprocess +from xqueue_watcher.grader import Grader + +class JavaGrader(Grader): + def grade(self, grader_path, grader_config, student_response): + # Write the student submission to a temp file + # Compile and run using javac / java in a subprocess + # Parse the test output + # Return results dict + ... +``` + +This approach is suitable when you control the execution environment (e.g. 
the grader +runs directly on a prepared VM or in a container that already has the required runtime +installed). + +### Strategy 2: Custom grader container image + +The `ContainerGrader` passes the student submission to the container via the +`SUBMISSION_CODE` environment variable and reads the grade result from the container's +stdout (see [The grader container protocol](#the-grader-container-protocol)). + +You can replace the Python-based entrypoint in the container with any program that +honours this protocol, making `ContainerGrader` language-agnostic. + +**Steps:** + +1. Write a grading entrypoint in your language of choice that reads + `SUBMISSION_CODE` from the environment, runs the tests, and prints the result + JSON to stdout. + +2. Build a Docker image that uses this entrypoint and includes the required runtime + and your grader scripts. + +3. Reference the image in your conf.d `KWARGS`. + +See the next section for the exact protocol your container must implement. + + +--- + +## The grader container protocol + +`ContainerGrader` communicates with grading containers through a simple +environment-variable-in / JSON-out protocol. Any container that implements this +protocol can be used as a grader backend, regardless of programming language. + +### Inputs + +The container receives the following environment variables: + +| Variable | Description | +|----------|-------------| +| `SUBMISSION_CODE` | The raw student submission as a UTF-8 string. Always set; may be empty if the student submitted nothing. | +| `GRADER_LANGUAGE` | BCP-47 language tag for i18n in feedback messages (e.g. `"en"`, `"es"`). Defaults to `"en"`. | +| `HIDE_OUTPUT` | If `"1"`, `"true"`, or `"yes"`, omit per-test output details from the result (students see only correct/incorrect). Defaults to `"0"`. | +| `GRADER_DEBUG` | If `"1"`, `"true"`, or `"yes"`, print step-by-step debug output to stderr. Defaults to `"0"`. 
| + +The container is also started with command-line arguments: + +``` + GRADER_PATH SEED +``` + +| Argument | Description | +|----------|-------------| +| `GRADER_PATH` | Absolute path (inside the container) to the grader definition file for this problem. | +| `SEED` | Integer random seed for reproducibility. Both the staff answer and the student submission must use this seed. | + +### Output + +The container must write a single JSON object to **stdout** and then exit. No other +output should appear on stdout (use stderr for diagnostics). + +```json +{ + "errors": ["optional error message visible to student"], + "tests": [ + ["Short description", "Long description", true, "expected output\n", "actual output\n"] + ], + "correct": true, + "score": 1.0 +} +``` + +| Field | Type | Description | +|-------|------|-------------| +| `errors` | `list[str]` | Error messages shown to the student. Empty list if no errors. | +| `tests` | `list` | Per-test result tuples: `[short_desc, long_desc, correct, expected, actual]`. May be empty when `errors` is non-empty. | +| `correct` | `bool` | Whether the overall submission is considered correct. | +| `score` | `float` | Score in the range `[0.0, 1.0]`. | + +The `ContainerGrader` parses the **last line** of stdout as JSON, so debug output +written to stdout before the final JSON line will break parsing. Always write debug +output to **stderr**. + +### Exit codes + +| Exit code | Meaning | +|-----------|---------| +| `0` | Grading completed (result JSON is on stdout). | +| non-zero | Grading failed; `ContainerGrader` raises a `RuntimeError` and the submission is returned as an error to the student. | + +The container is always killed after `timeout` seconds regardless of exit status. + + +--- + +## The Python grading pipeline (grader_support) + +For Python-based graders the `grader_support` package provides a ready-made grading +framework. 
The built-in `grader-base` Docker image runs this pipeline automatically +via `grader_support.entrypoint`. + +### grader_support.gradelib.Grader + +The course-side grader object. An instance named `grader` must be assigned at module +level in `grader.py`. + +```python +from grader_support import gradelib + +grader = gradelib.Grader() +``` + +**Methods:** + +| Method | Description | +|--------|-------------| +| `add_test(test)` | Append a `Test` object to the test suite. | +| `add_preprocessor(fn)` | Append a preprocessor function. Preprocessors are applied in order to both the staff answer and the student submission before any test runs. | +| `add_input_check(check)` | Append an input check function. Checks run before preprocessing; a non-None return value aborts grading with an error message. | +| `add_tests_from_class(cls)` | Add tests from a class: each method starting with `test_` becomes a `Test`. | +| `tests()` | Return the list of `Test` objects. | +| `input_errors(submission_str)` | Run all input checks and return a list of error strings. | +| `preprocess(submission_str)` | Apply all preprocessors and return the result. | + +### grader_support.gradelib.Test + +Represents a single test case. + +```python +test = gradelib.Test( + test_fn, # callable: (submission_module) -> None, printing to stdout + short_description, # str: concise description shown in feedback + detailed_description, # str: longer description (can be '') + compare=None, # optional callable: (expected_str, actual_str) -> bool +) +``` + +The `test_fn` callable receives the imported submission module and should print +something to stdout. The printed output is compared to the staff answer's output for +the same test. + +**`compare_results(expected, actual) -> bool`** + +The default comparison is `expected == actual`. 
Override for numeric tolerance,
+ordering-independent comparison, etc.:
+
+```python
+def numeric_compare(expected, actual):
+    try:
+        return abs(float(expected.strip()) - float(actual.strip())) < 1e-4
+    except (ValueError, TypeError):
+        return False
+
+test = gradelib.Test(test_fn, "Test precision", compare=numeric_compare)
+```
+
+`compare_results` may also raise `gradelib.EndTest(message)` to produce a custom
+error message appended to the student's output.
+
+**Built-in test helpers:**
+
+| Name | Description |
+|------|-------------|
+| `gradelib.InvokeStudentFunctionTest(fn_name, args, ...)` | Call the function named `fn_name` on the submission module with `args` and print the result. |
+| `gradelib.ExecWrappedStudentCodeTest(environment, ...)` | Exec the submission code (pre-wrapped by `wrap_in_string`) in the given namespace. |
+| `gradelib.invoke_student_function(fn_name, args, ...)` | Lower-level function version of `InvokeStudentFunctionTest`. |
+| `gradelib.exec_wrapped_code(environment, post_process)` | Lower-level function version of `ExecWrappedStudentCodeTest`. |
+
+### Preprocessors
+
+Preprocessors transform the submission text before it is executed. They receive and
+return a string.
+
+Built-in preprocessors:
+
+| Name | Description |
+|------|-------------|
+| `gradelib.fix_line_endings` | Remove `\r` characters. **Installed by default.** |
+| `gradelib.wrap_in_string` | Assign the raw code to a string variable so it can be exec'd multiple times. Required before `ExecWrappedStudentCodeTest`. |
+
+Custom preprocessors:
+
+```python
+def add_import(code):
+    return "import math\n" + code
+
+grader.add_preprocessor(add_import)
+```
+
+Preprocessors run in the order they were added.
+
+### Input checks
+
+Input checks receive the raw (unpreprocessed) submission text and return either `None`
+(check passed) or a non-empty string (error message to show the student). They run
+**before** any preprocessing or code execution.
+
+Built-in check factories:
+
+| Factory | Description |
+|---------|-------------|
+| `gradelib.required_substring(s)` | Fail if `s` is not present in the code. |
+| `gradelib.prohibited_substring(s)` | Fail if `s` is present in the code. |
+| `gradelib.required_keyword(kw)` | Fail if `kw` does not appear as a token (ignores comments/strings). |
+| `gradelib.prohibited_keyword(kw)` | Fail if `kw` appears as a token. |
+| `gradelib.must_define_function(name)` | Fail if no `def name(` is found. |
+| `gradelib.must_define_class(name)` | Fail if no `class name` is found. |
+| `gradelib.prohibited_function_definition(name)` | Fail if `def name(` is found. |
+| `gradelib.required_class_method(class_name, method_name)` | Fail if the named class does not define the named method. |
+| `gradelib.prohibited_class_method(class_name, method_name)` | Fail if the named class defines the named method. |
+| `gradelib.substring_occurs(s, at_least=N, at_most=M)` | Check that `s` appears a certain number of times. |
+| `gradelib.token_occurs(s, at_least=N, at_most=M)` | Same but token-aware (ignores comments/strings). |
+| `gradelib.count_non_comment_lines(at_least=N, at_most=M)` | Restrict the number of substantive source lines. |
+| `gradelib.one_of_required_keywords(list)` | Fail if none of the given keywords appear. |
+| `gradelib.input_check_or(error_msg, *checks)` | Pass if any of the given checks pass. |
+
+### grader_support.run.run()
+
+The low-level function that imports a grader and a submission module, runs all tests,
+and returns a raw result dict. It is used internally by both `JailedGrader` and the
+container entrypoint.
+
+```python
+from grader_support.run import run
+
+result = run(grader_name, submission_name, seed)
+```
+
+| Parameter | Description |
+|-----------|-------------|
+| `grader_name` | Importable module name of the grader (without `.py`). |
+| `submission_name` | Importable module name of the submission file (without `.py`). |
+| `seed` | Integer random seed.
| + +Returns: + +```python +{ + 'grader': {'status': 'ok', 'stdout': '...', 'exception': None}, + 'submission': {'status': 'ok', 'stdout': '...', 'exception': None}, + 'results': [("short desc", "long desc", "output"), ...], + 'exceptions': 0, +} +``` + +`status` is one of `'ok'`, `'error'`, `'caught'`, or `'notrun'`. + +You can call `run()` directly for unit-testing your grader scripts: + +```python +import sys +sys.path.insert(0, '/path/to/exercise-3/') + +from grader_support.run import run + +output = run('grader', 'answer', seed=42) +assert output['grader']['status'] == 'ok' +assert output['submission']['status'] == 'ok' +assert all(r[2] != '' for r in output['results']) # all tests produced output +``` + + +--- + +## HTML result rendering + +`xqueue_watcher.grader.Grader.render_results()` converts the `grade()` return dict +into an HTML string for display in the LMS. The HTML structure is: + +```html +
+<div class="test">
+<header>Test results</header>
+  <section>
+    <div class="shortform">
+    CORRECT | INCORRECT | ERROR
+    </div>
+    <div class="longform">
+      <div class="result-output result-correct">
+        ...
+      </div>
+      <div class="result-output result-incorrect">
+        ...
+      </div>
+      ...
+    </div>
+  </section>
+</div>
+```
+
+All user-supplied strings (test descriptions, output, error messages) are HTML-escaped
+before insertion.
+
+If you need a different visual layout, override `render_results()` in your `Grader`
+subclass. The method signature is:
+
+```python
+def render_results(self, results: dict) -> str:
+    ...
+```
+
+where `results` is the dict returned by `grade()`.
diff --git a/docs/operators.md b/docs/operators.md
new file mode 100644
index 0000000..1eb8164
--- /dev/null
+++ b/docs/operators.md
@@ -0,0 +1,461 @@
+# Operator Guide
+
+This guide covers installing, configuring, and running xqueue-watcher in production and
+development environments.
+
+## Table of Contents
+
+- [Prerequisites](#prerequisites)
+- [Installation](#installation)
+- [Configuration layout](#configuration-layout)
+- [Configuration reference](#configuration-reference)
+  - [xqwatcher.json](#xqwatcherjson)
+  - [logging.json](#loggingjson)
+  - [xqueue_servers.json](#xqueue_serversjson)
+  - [conf.d queue files](#confd-queue-files)
+- [Environment variables](#environment-variables)
+- [Running xqueue-watcher](#running-xqueue-watcher)
+  - [Directly (bare metal / virtualenv)](#directly-bare-metal--virtualenv)
+  - [Docker / docker-compose](#docker--docker-compose)
+  - [Kubernetes](#kubernetes)
+- [Security considerations](#security-considerations)
+- [Metrics](#metrics)
+- [Logging](#logging)
+
+
+---
+
+## Prerequisites
+
+- Python 3.11 or newer (for bare-metal installs)
+- A running [XQueue](https://github.com/openedx/xqueue) service
+- Docker or a Kubernetes cluster (required when using `ContainerGrader`)
+
+
+## Installation
+
+The recommended way to install dependencies is with [uv](https://github.com/astral-sh/uv):
+
+```bash
+git clone https://github.com/openedx/xqueue-watcher.git
+cd xqueue-watcher
+uv sync
+```
+
+Alternatively, with pip:
+
+```bash
+pip install -e .
+``` + +The `kubernetes` and `docker` Python packages are optional extras required only by +`ContainerGrader`: + +```bash +uv sync --extra kubernetes # for the Kubernetes backend +uv sync --extra docker # for the Docker backend +``` + + +## Configuration layout + +Keep course-specific files outside the xqueue-watcher repository so you can update +the watcher independently: + +``` +config/ +├── xqwatcher.json # optional: override manager defaults +├── logging.json # optional: Python logging dictConfig +├── xqueue_servers.json # named XQueue server references (keep out of VCS) +└── conf.d/ + ├── my-course.json # one file per queue (or group of queues) + └── another-course.json +``` + +Start xqueue-watcher by pointing it at the `config/` directory: + +```bash +python -m xqueue_watcher -d /path/to/config +``` + +The watcher will: +1. Load `logging.json` (if present) or fall back to stdout logging. +2. Load `xqwatcher.json` (if present) or fall back to defaults. +3. Load `xqueue_servers.json` (if present) for named server references. +4. Load every `*.json` file in `conf.d/` as a queue configuration. + + +## Configuration reference + +### xqwatcher.json + +All keys are optional; missing keys fall back to the defaults shown below. + +```json +{ + "HTTP_BASIC_AUTH": null, + "POLL_TIME": 10, + "REQUESTS_TIMEOUT": 1, + "POLL_INTERVAL": 1, + "LOGIN_POLL_INTERVAL": 5, + "FOLLOW_CLIENT_REDIRECTS": false +} +``` + +| Key | Default | Description | +|-----|---------|-------------| +| `HTTP_BASIC_AUTH` | `null` | `[username, password]` for HTTP Basic Auth on all outbound requests. | +| `POLL_TIME` | `10` | Seconds between liveness checks of watcher threads. | +| `REQUESTS_TIMEOUT` | `1` | Timeout (seconds) for outbound HTTP requests to XQueue. | +| `POLL_INTERVAL` | `1` | Seconds between queue-poll attempts per watcher thread. | +| `LOGIN_POLL_INTERVAL` | `5` | Seconds between login-retry attempts when authentication fails. 
| +| `FOLLOW_CLIENT_REDIRECTS` | `false` | Follow HTTP redirects on XQueue requests. | + + +### logging.json + +A standard Python [logging dictConfig](https://docs.python.org/3/library/logging.config.html#logging-config-dictschema) +object. If this file is absent xqueue-watcher writes structured log lines to stdout +(suitable for container runtimes and Kubernetes). + +Example — write to a rotating file: + +```json +{ + "version": 1, + "disable_existing_loggers": false, + "handlers": { + "file": { + "class": "logging.handlers.RotatingFileHandler", + "filename": "/var/log/xqueue-watcher/xqueue-watcher.log", + "maxBytes": 10485760, + "backupCount": 5, + "formatter": "standard" + } + }, + "formatters": { + "standard": { + "format": "%(asctime)s %(levelname)s %(process)d [%(name)s] %(filename)s:%(lineno)d - %(message)s" + } + }, + "root": { + "handlers": ["file"], + "level": "INFO" + } +} +``` + +For Kubernetes and container environments, omit `logging.json` entirely and control +the log level with the `XQWATCHER_LOG_LEVEL` environment variable. + + +### xqueue_servers.json + +Defines named XQueue server connections so that credentials are kept in one place and +out of the per-queue conf.d files. This file should **never be committed to version +control**; in Kubernetes it is injected as a Secret. + +```json +{ + "default": { + "SERVER": "http://xqueue-svc:18040", + "AUTH": ["lms_user", "s3cr3t"] + }, + "staging": { + "SERVER": "http://xqueue-staging:18040", + "AUTH": ["lms_user", "staging_pass"] + } +} +``` + +Each key is a server name that can be referenced from conf.d files using `SERVER_REF`. + + +### conf.d queue files + +Each JSON file in `conf.d/` may configure one or more queues. 
A minimal file using a +named server reference: + +```json +{ + "course-101-grading": { + "SERVER_REF": "default", + "CONNECTIONS": 2, + "HANDLERS": [ + { + "HANDLER": "xqueue_watcher.containergrader.ContainerGrader", + "KWARGS": { + "grader_root": "/graders/course-101/", + "image": "registry.example.com/course-101-grader:latest", + "backend": "kubernetes", + "namespace": "grading", + "cpu_limit": "500m", + "memory_limit": "256Mi", + "timeout": 30 + } + } + ] + } +} +``` + +Alternatively, embed server connection details directly (acceptable in non-secret +environments): + +```json +{ + "course-101-grading": { + "SERVER": "http://xqueue-svc:18040", + "AUTH": ["lms_user", "s3cr3t"], + "CONNECTIONS": 1, + "HANDLERS": [...] + } +} +``` + +> **Note**: `SERVER_REF` and `SERVER`/`AUTH` are mutually exclusive. Providing both +> raises a `ValueError` at startup. + +#### Queue configuration keys + +| Key | Required | Description | +|-----|----------|-------------| +| `SERVER` | One of `SERVER` or `SERVER_REF` | XQueue server URL, e.g. `http://xqueue:18040`. | +| `AUTH` | With `SERVER` | `[username, password]` for the XQueue Django user. | +| `SERVER_REF` | One of `SERVER` or `SERVER_REF` | Name of a server from `xqueue_servers.json`. | +| `CONNECTIONS` | No (default: 1) | Number of polling threads to spawn for this queue. | +| `HANDLERS` | Yes | List of handler objects (see below). | +| `NAME_OVERRIDE` | No | Poll a different queue name than the config key. | + +#### Handler configuration keys + +| Key | Required | Description | +|-----|----------|-------------| +| `HANDLER` | Yes | Dotted Python path to a `Grader` subclass, e.g. `xqueue_watcher.containergrader.ContainerGrader`. | +| `KWARGS` | No | Keyword arguments passed to the handler constructor. | +| `CODEJAIL` | No | CodeJail sandbox configuration (legacy; prefer `ContainerGrader`). | + +See [Grader Interface](grader-interface.md) for the full list of built-in handlers and +their `KWARGS`. 
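+
+To illustrate how a `HANDLERS` entry is consumed, the watcher conceptually resolves
+the dotted `HANDLER` path to a class and instantiates it with `KWARGS`. A minimal
+sketch of that resolution (not the actual manager code):
+
+```python
+import importlib
+
+def load_handler(handler_cfg):
+    # Split "pkg.module.ClassName" into a module path and a class name.
+    module_path, class_name = handler_cfg["HANDLER"].rsplit(".", 1)
+    handler_cls = getattr(importlib.import_module(module_path), class_name)
+    # KWARGS (if present) become constructor keyword arguments.
+    return handler_cls(**handler_cfg.get("KWARGS", {}))
+```
+
+Any importable class reachable from the watcher's Python path can be named this way,
+which is why `HANDLER` must be a full dotted path rather than a bare class name.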
+ + +--- + +## Environment variables + +All environment variables use the `XQWATCHER_` prefix. They override or supplement +JSON file configuration and are the recommended way to inject settings in containers. + +### Manager settings + +| Variable | Default | Description | +|----------|---------|-------------| +| `XQWATCHER_LOG_LEVEL` | `INFO` | Root log level (`DEBUG`, `INFO`, `WARNING`, `ERROR`, `CRITICAL`). | +| `XQWATCHER_HTTP_BASIC_AUTH` | — | HTTP Basic Auth as `username:password`. | +| `XQWATCHER_POLL_TIME` | `10` | Seconds between liveness checks of watcher threads. | +| `XQWATCHER_REQUESTS_TIMEOUT` | `1` | Timeout (seconds) for outbound HTTP requests. | +| `XQWATCHER_POLL_INTERVAL` | `1` | Seconds between queue-poll attempts. | +| `XQWATCHER_LOGIN_POLL_INTERVAL` | `5` | Seconds between login-retry attempts. | +| `XQWATCHER_FOLLOW_CLIENT_REDIRECTS` | `false` | Follow HTTP redirects. | +| `XQWATCHER_VERIFY_TLS` | `true` | Verify TLS certificates. **Never set to `false` in production.** | + +### ContainerGrader defaults + +These set deployment-wide defaults that individual conf.d queue configs may override. + +| Variable | Default | Description | +|----------|---------|-------------| +| `XQWATCHER_GRADER_BACKEND` | `kubernetes` | Container backend: `kubernetes` or `docker`. | +| `XQWATCHER_GRADER_NAMESPACE` | `default` | Kubernetes namespace for grading Jobs. | +| `XQWATCHER_GRADER_CPU_LIMIT` | `500m` | CPU limit for grading containers. | +| `XQWATCHER_GRADER_MEMORY_LIMIT` | `256Mi` | Memory limit for grading containers. | +| `XQWATCHER_GRADER_TIMEOUT` | `20` | Max wall-clock seconds per grading job. | +| `XQWATCHER_DOCKER_HOST_GRADER_ROOT` | — | Host-side absolute path to the grader root when xqueue-watcher itself runs in a Docker container (see [Docker section](#docker--docker-compose)). | +| `XQWATCHER_SUBMISSION_SIZE_LIMIT` | `1048576` | Maximum submission size in bytes (1 MB). Larger submissions are rejected before a container is launched. 
| + + +--- + +## Running xqueue-watcher + +### Directly (bare metal / virtualenv) + +```bash +# Install dependencies +uv sync # or: pip install -e . + +# Run +python -m xqueue_watcher -d /path/to/config +``` + +xqueue-watcher runs in the foreground. Use a process supervisor (systemd, supervisord, +etc.) for production deployments. + + +### Docker / docker-compose + +A `docker-compose.yml` is included for local development. It starts: +- An XQueue service +- xqueue-watcher (mounts `conf.d/` and `data/` as volumes) +- Builds the `grader-base:local` image that your grader images extend + +Before starting, set the host-side path to your grader data: + +```bash +export XQWATCHER_DOCKER_HOST_GRADER_ROOT="$(pwd)/data" +docker compose up +``` + +> **Why `XQWATCHER_DOCKER_HOST_GRADER_ROOT`?** When xqueue-watcher runs inside a +> container and spawns grader containers via the Docker socket, the Docker daemon +> interprets bind-mount source paths relative to the *host* filesystem — not the +> watcher container. This variable tells the watcher what the corresponding host-side +> path is so it can pass the correct path to the Docker daemon. + +To build your own grader image for testing: + +```bash +docker build -f grader_support/Dockerfile.base -t grader-base:local . +# Then build your course-specific grader image on top of grader-base:local +``` + +See [Course Team Guide — Local Testing](course-teams.md#local-testing-without-edx) for +a worked example. + + +### Kubernetes + +Manifests are provided in `deploy/kubernetes/`: + +``` +deploy/kubernetes/ +├── kustomization.yaml +├── configmap.yaml # conf.d queue configs +├── secret.yaml.template # xqueue_servers.json (fill in credentials) +├── deployment.yaml # xqueue-watcher Deployment +├── serviceaccount.yaml # ServiceAccount for the watcher pod +├── rbac.yaml # RBAC for creating/watching grading Jobs +└── networkpolicy.yaml # Egress restriction for grading pods +``` + +**Quickstart:** + +1. 
Copy `deploy/kubernetes/secret.yaml.template` to `secret.yaml`, fill in your XQueue
+   credentials, and apply it:
+
+   ```bash
+   cp deploy/kubernetes/secret.yaml.template deploy/kubernetes/secret.yaml
+   # Edit secret.yaml — add SERVER and AUTH values
+   kubectl apply -f deploy/kubernetes/secret.yaml
+   ```
+
+2. Edit `deploy/kubernetes/configmap.yaml` to add your queue configurations.
+
+3. Apply the remaining manifests:
+
+   ```bash
+   kubectl apply -k deploy/kubernetes/
+   ```
+
+**Security posture for grading Jobs:**
+
+The `ContainerGrader` Kubernetes backend applies a defence-in-depth approach to grading
+containers:
+
+- Non-root user (UID 1000), read-only root filesystem
+- All Linux capabilities dropped
+- RuntimeDefault seccomp profile
+- `/tmp` backed by a size-capped `emptyDir` (prevents disk exhaustion)
+- No service-account token auto-mounted
+- CPU and memory limits enforced via the Job spec
+
+Operators should additionally ensure:
+
+- The grading namespace enforces the Kubernetes [restricted Pod Security Standard](https://kubernetes.io/docs/concepts/security/pod-security-standards/).
+- A `NetworkPolicy` (provided in `deploy/kubernetes/networkpolicy.yaml`) prevents egress
+  from grading pods.
+- Grader images are signed and scanned; use digest-pinned references in production.
+- Completed grading Jobs are cleaned up automatically via `ttlSecondsAfterFinished` in
+  the Job spec (the TTL-after-finished controller has been GA, with no feature gate
+  required, since Kubernetes 1.23).
+- PID limits are enforced via the kubelet's `podPidsLimit` setting (`--pod-max-pids`),
+  since the Job spec alone cannot set PID limits.
+
+**Using `poll_image_digest` for automatic image updates:**
+
+If you push new grader images to a tag (e.g.
`:latest`) and want Kubernetes nodes to +always use the most recent version without relying on `imagePullPolicy: Always` for +every pod, enable digest polling: + +```json +{ + "KWARGS": { + "image": "registry.example.com/course-grader:latest", + "poll_image_digest": true, + "digest_poll_interval": 300 + } +} +``` + +This starts a background thread that periodically resolves the tag to its current digest +(`repo@sha256:…`). Grading Jobs use the pinned digest reference, ensuring nodes pull +the correct image exactly once. + + +--- + +## Security considerations + +- **Never commit `xqueue_servers.json`** to version control. It contains XQueue + credentials. Use a Kubernetes Secret or an equivalent secrets-management system. +- **Use `SERVER_REF`** in conf.d queue files rather than inline `SERVER`/`AUTH` so + queue configs are safe to commit. +- **Pin grader images by digest** in production (`repo@sha256:…`) to prevent supply-chain + attacks via mutable tags. +- **Apply the `NetworkPolicy`** in `deploy/kubernetes/networkpolicy.yaml` to prevent + student code from making outbound network requests during grading. +- **Set `XQWATCHER_VERIFY_TLS=true`** (the default) in all environments. The `false` + value exists solely for development with self-signed certificates. + + +--- + +## Metrics + +xqueue-watcher exposes OpenTelemetry metrics via the `xqueue_watcher.metrics` module. +The following instruments are recorded: + +| Instrument | Type | Description | +|------------|------|-------------| +| `xqwatcher.process_item` | Counter | Submissions received for grading. | +| `xqwatcher.replies` | Counter | Successful replies sent back to XQueue. | +| `xqwatcher.grader_payload_errors` | Counter | Submissions with unparseable grader payloads. | +| `xqwatcher.grading_time` | Histogram | Wall-clock grading time in seconds. | + +Configure an OTLP exporter by setting the standard `OTEL_EXPORTER_OTLP_ENDPOINT` +environment variable before starting xqueue-watcher. 
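+
+For example, a deployment might export the standard OpenTelemetry variables before
+launching the watcher (the collector address below is a placeholder for your own
+deployment):
+
+```bash
+# Point the OTLP exporter at your collector (placeholder address), then
+# start the watcher as usual. Both variables are read by the OpenTelemetry
+# SDK itself, so no xqueue-watcher configuration changes are needed.
+export OTEL_EXPORTER_OTLP_ENDPOINT="http://otel-collector:4317"
+export OTEL_SERVICE_NAME="xqueue-watcher"
+```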
+
+
+---
+
+## Logging
+
+xqueue-watcher uses Python's standard `logging` module. The root logger name is
+`xqueue_watcher`.
+
+**Without `logging.json`** (default for containers): structured lines are written to
+stdout with the format:
+
+```
+2024-01-15 12:00:00,123 INFO 42 [xqueue_watcher.manager] manager.py:176 - Starting
+```
+
+The level defaults to `INFO` and can be raised or lowered with `XQWATCHER_LOG_LEVEL`.
+
+**With `logging.json`**: the file is loaded as a Python logging
+[dictConfig](https://docs.python.org/3/library/logging.config.html#logging-config-dictschema),
+giving full control over handlers, formatters, and per-logger levels.
+
+**Debug grader containers**: set `GRADER_DEBUG=1` in the environment of grading
+containers to print step-by-step trace output to stderr. Kubernetes captures both
+stdout and stderr in pod logs, so `kubectl logs <pod>` will show both the JSON result
+and the debug trace.
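+
+The default stdout behaviour corresponds roughly to the following `dictConfig` — a
+sketch of equivalent configuration built from the format string shown above, not the
+watcher's actual startup code:
+
+```python
+import logging.config
+
+logging.config.dictConfig({
+    "version": 1,
+    "disable_existing_loggers": False,
+    "formatters": {
+        "standard": {
+            "format": "%(asctime)s %(levelname)s %(process)d [%(name)s] %(filename)s:%(lineno)d - %(message)s"
+        }
+    },
+    "handlers": {
+        "console": {"class": "logging.StreamHandler", "formatter": "standard"}
+    },
+    "root": {"handlers": ["console"], "level": "INFO"},
+})
+
+logging.getLogger("xqueue_watcher").info("Starting")
+```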