From c1756922f65a2b56c1632c33df0b821bc310bea1 Mon Sep 17 00:00:00 2001 From: Gordon Bean Date: Fri, 27 Mar 2026 16:13:51 -0600 Subject: [PATCH 01/14] backlog --- design/download.md | 5 +- design/trusted.md | 305 +++++++++++++- .../backlog/agent-environment-management.md | 384 ++++++++++++++++++ .../backlog/list-skills-builtin-filter.md | 37 ++ .../backlog/third-party-skill-trees.md | 280 +++++++++++++ 5 files changed, 1001 insertions(+), 10 deletions(-) create mode 100644 src/governing_docs/backlog/agent-environment-management.md create mode 100644 src/governing_docs/backlog/list-skills-builtin-filter.md create mode 100644 src/governing_docs/backlog/third-party-skill-trees.md diff --git a/design/download.md b/design/download.md index c9f84ec..d0782a6 100644 --- a/design/download.md +++ b/design/download.md @@ -43,9 +43,10 @@ Behavior: ## Scope Boundaries -- `get role` and `get skill` should remain unchanged - remote content should not be fetched dynamically during role or skill loading -- this design does not yet define trust verification for downloaded content +- this design does not yet define the full trust verification model for downloaded content +- future trust work may add verification-state checks to `get role` and `get skill` without making + loading networked - this design does not yet define an interactive merge strategy for dirty local copies ## Implementation Notes diff --git a/design/trusted.md b/design/trusted.md index 5574bcd..779f137 100644 --- a/design/trusted.md +++ b/design/trusted.md @@ -1,12 +1,301 @@ -# TODO: Trusted Content Framework +# Trusted Content Verification Framework -Design and implement a framework for verifying and tracking trusted remote content used by `myteam download` and `myteam update`. +## Summary -This should cover: +Downloaded `myteam` content should carry explicit verification state. Unverified downloaded content +must not be loaded directly through `myteam get role ...` or `myteam get skill ...`. 
-- how trust is established for a remote source -- how trust state is recorded locally -- how trust is checked during download and update -- how changes in trust status are surfaced to the user +Instead, `myteam get ...` should detect that the target node belongs to unverified downloaded +content and return instructions telling the current agent to delegate review to a built-in verifier +role, then reload the original node after verification succeeds. -This document is intentionally a placeholder until the trust model is designed. +When downloaded content is updated, its verification state should revert to unverified. + +This keeps runtime loading local-only while adding a review gate between "content exists on disk" +and "agents are allowed to execute its `load.py`." + +## Problem + +The current backlog around downloads and trust is too thin for the actual risk. + +Today: + +- `myteam download` writes remote content into the local tree +- `myteam get ...` executes `load.py` from the resolved node +- there is no verification state between download and execution + +That means downloaded content becomes executable agent instruction immediately after installation. + +The problem gets sharper once updates exist: + +- a subtree that was previously reviewed can change on update +- the new content should not inherit the old review status +- agents need a standard path for re-verification + +## Goals + +- Require explicit verification before downloaded content can be loaded. +- Reset verification state after any update that changes managed content. +- Keep the trust boundary local and inspectable. +- Route review work through a built-in verifier role rather than ad hoc project instructions. +- Make `myteam get ...` enforce the gate consistently for downloaded roles and skills. +- Preserve the existing local filesystem execution model once content is verified. + +## Non-Goals + +- Cryptographic signing as the only trust mechanism. 
+- Automatic semantic review of downloaded content. +- Dynamic remote checks during `get role` or `get skill`. +- Silently executing unverified content with only a warning. + +## Core Model + +Downloaded content moves through explicit states: + +1. `unverified` +2. `verified` +3. `stale` + +Practical simplification: + +- `stale` can be represented as `unverified` with a reason such as `updated` + +The important rule is: + +- only `verified` content may be executed through `myteam get ...` + +## Verification Unit + +Verification should attach to a downloaded managed subtree, not only to individual files. + +That subtree is the same natural unit introduced by the download/update design: + +- one downloaded roster or subtree install +- one metadata file at its root +- one provenance record +- one verification status record + +This keeps trust state aligned with provenance and update operations. + +## Metadata Model + +Extend the download metadata file, or add a sibling trust file, with fields such as: + +- source repo +- source path or roster +- source ref +- installed timestamp +- installed content fingerprint +- verification status +- verification timestamp +- verifier identity or note +- verification basis +- status reason + +A single hidden file such as `.myteam-source.yml` can hold both provenance and verification data if +that stays readable. + +Suggested trust fields: + +```yaml +verification: + status: unverified + reason: downloaded + content_fingerprint: abc123 + verified_at: + verified_by: + basis: +``` + +When content is updated and the fingerprint changes: + +- `status` becomes `unverified` +- `reason` becomes `updated` +- previous verification facts are cleared or retained only as history, not as active status + +## `myteam get ...` Gate + +`myteam get role ...` and `myteam get skill ...` should check verification status before executing +`load.py` for downloaded content. 
+ +Behavior: + +- if the node is project-authored and not managed by download metadata, proceed normally +- if the node is downloaded and verified, proceed normally +- if the node is downloaded and unverified, do not execute its `load.py` + +Instead, `myteam get ...` should print a blocking instruction message describing: + +- that the requested content is downloaded and unverified +- which managed subtree owns it +- why it is unverified, such as `downloaded` or `updated` +- that the agent must delegate to the built-in verifier role +- that the agent should reload the original requested node after verification completes + +This keeps the trust policy in the CLI, not in ad hoc project prompts. + +## Built-In Verifier Role + +Verification should be handled by a built-in role dedicated to reviewing unverified downloaded +content. + +Desired behavior: + +- `myteam` ships a built-in verifier role +- the role receives the target subtree and provenance context +- the role guides an agent through inspecting the downloaded content and deciding whether to mark it + verified + +This is intentionally a role, not only a skill, because the task is a distinct responsibility: + +- review downloaded content +- decide whether it is acceptable to admit into the live instruction tree +- update verification state accordingly + +Example instruction flow from `myteam get skill vendor/foo`: + +1. Agent runs `myteam get skill vendor/foo` +2. `myteam` detects the owning downloaded subtree is unverified +3. `myteam` prints instructions such as: + `Content for 'vendor/foo' is unverified. Delegate to built-in role 'builtins/verifier' with the managed subtree path and then rerun 'myteam get skill vendor/foo'.` +4. The agent delegates review to the verifier role +5. The verifier role reviews and, if approved, marks the subtree verified +6. 
The original agent reruns `myteam get skill vendor/foo` + +## Built-In Role Packaging + +Current architecture has packaged built-in skills, not packaged built-in roles. + +This backlog item therefore implies one of: + +- add a packaged built-in role mechanism parallel to built-in skills +- or broaden the provider design so built-in roles and skills can both be packaged + +The verification design should not force the implementation detail yet, but the product requirement +is clear: + +- there must be a built-in verifier role available without requiring projects to author it manually + +That built-in role may later share the same provider/resolver architecture as built-in skills. + +## Verification Workflow + +The verifier role should guide an agent through checks such as: + +- inspect provenance metadata +- inspect diff against prior verified version if this is an update +- inspect `load.py`, instruction files, and tools in the subtree +- look for unexpected executables, shell bootstraps, or suspicious environment setup +- decide whether to approve, reject, or escalate to the user + +If approved: + +- write updated verification metadata + +If rejected: + +- leave status as unverified or mark as rejected +- explain why the content should not be loaded + +## Update Behavior + +After `myteam update` modifies a managed subtree: + +- verification status must revert to unverified +- the reason should indicate update or changed fingerprint + +This should happen even if: + +- the source repo is the same +- the target path is the same +- the previous version was verified + +Trust attaches to reviewed content, not only to the source location. + +## Scope Boundaries With Download Design + +This work should integrate with the download/update provenance design rather than replace it. 
+ +Expected relationship: + +- `download.md` defines install metadata and update flow +- this trust design adds verification state and load gating on top of that metadata + +The rule in `download.md` that "`get role` and `get skill` remain unchanged" should be revised. +They should remain local-only and filesystem-based, but they should gain trust-state enforcement. + +## Failure and UX Policy + +Unverified content should be blocked, not warned. + +Recommended behavior for blocked loads: + +- exit non-zero +- print the verifier-delegation instructions on stdout or stderr in a form an agent can act on +- include the exact command to rerun after verification + +The important thing is that the message be operational, not just diagnostic. + +## Open Design Choice: Role vs Skill + +The request here prefers a built-in verifier role, which is sensible because review is a distinct +responsibility. + +One alternative would be: + +- keep a built-in verifier skill +- tell the current agent to load that skill and perform verification itself + +That is weaker operationally because it does not create a clean handoff boundary. + +Recommended direction: + +- use a built-in verifier role + +## Security Notes + +This feature changes the effective execution boundary: + +- downloaded content is no longer executable merely because it exists on disk +- execution now depends on explicit local verification state + +That is a substantial safety improvement even before introducing signatures or stronger provenance +checks. + +However, verification metadata itself becomes sensitive. 
We should assume: + +- only explicit verifier actions can mark content verified +- updates cannot preserve prior verification automatically when content changes +- manual local edits to metadata are possible and should be treated as outside the automated trust + guarantee + +## Implementation Plan + +### Phase 1: metadata and gating + +- extend managed subtree metadata with verification state +- teach `download` to initialize downloaded content as unverified +- teach `update` to reset verification state after content changes +- teach `get role` and `get skill` to block unverified downloaded content before executing `load.py` + +### Phase 2: built-in verifier role + +- add packaged built-in role support if needed +- ship a verifier role with clear review instructions +- ensure the blocking `get ...` message points to that role + +### Phase 3: review ergonomics + +- provide diff or tree-summary tools to help the verifier role inspect content +- optionally add commands for marking verification outcomes in a structured way +- consider history or audit logging for repeated updates + +## Open Questions + +- Should verification state live inside the existing download metadata file or a separate trust file? +- Should rejected content have a separate explicit status from unverified? +- Should `myteam get role ...` and `myteam get skill ...` print verifier instructions to stdout, + stderr, or both? +- What is the cleanest packaged built-in role mechanism given today’s built-in-skill-only model? +- Should verification attach only to downloaded subtrees, or also to third-party packaged provider + content in the future? 
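As one concrete possibility for the `content_fingerprint` field in the metadata model above, a deterministic subtree hash could be computed like this. The hash choice, traversal order, and function name are assumptions, not part of the design:

```python
# Hedged sketch: a stable fingerprint over a managed subtree. Any change to a
# file path or file content under the root changes the fingerprint, which is
# what lets updates reset verification state reliably.
import hashlib
from pathlib import Path


def subtree_fingerprint(root: Path) -> str:
    """Hash relative paths and contents of all files, in sorted order."""
    digest = hashlib.sha256()
    for path in sorted(p for p in root.rglob("*") if p.is_file()):
        digest.update(str(path.relative_to(root)).encode())
        digest.update(path.read_bytes())
    return digest.hexdigest()
```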
diff --git a/src/governing_docs/backlog/agent-environment-management.md b/src/governing_docs/backlog/agent-environment-management.md new file mode 100644 index 0000000..f828d35 --- /dev/null +++ b/src/governing_docs/backlog/agent-environment-management.md @@ -0,0 +1,384 @@ +# Agent Environment Management Design + +## Summary + +`myteam` needs a first-class way to describe and prepare execution environments for agent tools +without weakening sandbox approval semantics. + +The central constraint is that agent harness permissions are prefix-based. A generic approved prefix +such as: + +- `source .env` +- `set -a; source .env; ...` +- `.venv/bin/python` +- `myteam run ...` + +can become too broad if it effectively means "enter an environment and then do anything." + +So the design should not be "teach agents to activate an environment first." The design should be: + +- declare environments explicitly +- bind tools to specific environments +- resolve each tool to a concrete executable invocation +- keep approval prefixes specific to the actual tool, not to a generic environment bootstrap step + +## Problem + +Current guidance is informal: + +- if a local `.venv` exists, try `venv/bin/python -m myteam ...` +- if a tool-owning role or skill has a `venv`, use that `venv` + +This is not enough as the system grows. + +Problems: + +- there is no explicit environment model in the project tree +- agents have to guess whether to use system Python, a local `venv`, or something else +- environment variables are often loaded through shell setup commands that make approval prefixes + overly powerful +- a single approved prefix can accidentally authorize a large class of unrelated actions +- third-party skills and tools will make environment guessing even less reliable + +## Goals + +- Give `myteam` a clear, inspectable environment model for agent-executed tools. +- Preserve the intent of prefix-based sandbox approvals. 
+- Make the environment needed for a tool discoverable from the same local tree as the tool itself. +- Reduce ad hoc shell activation and `source`-based workflows. +- Support both Python virtual environments and environment-variable loading as common cases. +- Keep runtime execution local-only and deterministic. + +## Non-Goals + +- Full package-manager orchestration for every ecosystem. +- Replacing the external harness approval model. +- Automatically trusting arbitrary environment-loading scripts. +- Making all commands run through one generic catch-all launcher. + +## Core Design Principle + +Environment setup should be treated as configuration attached to a specific tool invocation, not as +a standalone shell session the agent enters. + +That means the object that should become approvable is not: + +- "activate this environment" + +It is: + +- "run this specific tool, with this declared executable and these declared environment sources" + +## Proposed Model + +### 1. Named runtimes + +Add a declarative runtime concept. A runtime is a named execution environment that can be referenced +by tools. + +Examples: + +- `python/default` +- `python/notebook` +- `canvas/api` +- `node/docs` + +A runtime declaration should include only the information needed to build a concrete process +invocation, for example: + +- runtime kind, such as `python-venv` +- path to interpreter or executable root +- optional environment variable sources +- optional fixed environment variables +- optional working directory rules + +### 2. Tool declarations reference runtimes + +Today tools are just discovered as `.py` files colocated with roles and skills. That is convenient, +but it is not enough to express execution requirements safely. 
+ +We likely need a small manifest layer for tools, for example adjacent metadata that says: + +- this tool's entry point is `list_active_courses.py` +- run it with runtime `canvas/api` +- expose it to agents as `list_active_courses` + +The important shift is that the tool definition owns the environment binding. + +### 3. Resolve to concrete command vectors + +`myteam` should be able to resolve a tool plus runtime to an exact argv, not to a shell snippet. + +Good: + +- `["/abs/project/.venv/bin/python", "canvas/list_active_courses.py"]` +- `["/abs/project/.venv/bin/pytest", "tests/test_cli.py"]` + +Risky: + +- `["bash", "-lc", "source .env; source .venv/bin/activate; pytest ..."]` + +If an environment variable file must be loaded, `myteam` should model that as explicit process +environment construction, not as a generic shell prelude whenever possible. + +### 4. Generated per-tool launchers, not one generic runner + +If the agent harness needs stable prefixes for approval, `myteam` should prefer generating narrow, +per-tool launcher scripts or commands rather than one generic runner. + +For example: + +- `.myteam/.bin/list-active-courses` +- `.myteam/.bin/render-dashboard` +- `.myteam/.bin/pytest-project` + +Each launcher would have a single declared target tool and runtime. Its behavior would be limited to +that binding. + +This is much safer than approving a broad prefix like: + +- `myteam run` +- `bash -lc set -a; source .env; ...` + +because the prefix itself still names the specific capability being granted. + +## Why Per-Tool Launchers + +The harness approval model is about intent. A launcher should preserve that intent at the prefix +level. + +A specific launcher path communicates: + +- what tool is being run +- which environment contract applies +- that the approval is scoped to one declared capability + +A generic runtime-entry command hides too much. 
+ +For example, approving: + +- `.myteam/.bin/list-active-courses` + +is meaningfully narrower than approving: + +- `myteam env exec canvas/api` + +even if both eventually execute the same Python interpreter. + +## Environment Variables + +Environment variables need a stricter model than "source whatever file exists." + +Recommended direction: + +- allow declarative env files, such as `.env` +- allow explicit key allowlists or fixed variables +- load them in `myteam` code or a generated launcher, not through arbitrary shell sourcing, when the + format is structured enough to parse safely + +Near-term supported sources could be intentionally limited: + +- `.env`-style key/value files +- fixed inline variables in manifest metadata + +Avoid, at least initially: + +- arbitrary shell scripts as env sources +- commands whose purpose is "prepare a shell for later commands" + +If users need arbitrary shell sourcing, that should be treated as an escape hatch with a visibly +higher trust bar. + +## Runtime Kinds + +Near-term runtime kinds could be small and explicit. + +### `python-venv` + +Fields: + +- `venv_path` +- optional default module runner or executable names +- optional env file references + +Resolution: + +- use `venv_path/bin/python` +- or use specific executables from `venv_path/bin/...` + +### `python-interpreter` + +Fields: + +- `python_path` +- optional env file references + +### `command-prefix` + +This should exist only as a constrained compatibility escape hatch for cases that cannot yet be +modeled directly. It is risky because it can collapse back into broad approval semantics. + +If included at all, it should: + +- be clearly marked as high-trust +- not be the recommended default +- ideally be excluded from auto-generated approvable launchers + +## Project Structure Options + +There are two plausible homes for runtime metadata. + +### Option A: node-local manifests + +Each role or skill can declare runtimes for the tools it owns. 
Pros:

- local and composable
- keeps ownership near the tool
- fits the existing colocated-tree model

Cons:

- duplicated runtime definitions across neighboring nodes
- harder to share one environment across many tools

### Option B: project-level runtime registry

Add a central runtime registry under `.myteam/`, then let tools reference entries by name.

Pros:

- easier sharing and auditing
- one place to inspect environment policy
- simpler for per-project standard runtimes

Cons:

- weakens locality
- can become a dumping ground if not scoped carefully

Recommended direction:

- use a project-level runtime registry
- let node-local tool manifests reference it

This is the clearest model for reuse and auditing.

## CLI Surface Ideas

The CLI should separate three concerns:

1. describing runtimes
2. materializing launchers
3. running diagnostics

Possible commands:

- `myteam env list`
- `myteam env doctor [runtime]`
- `myteam tool list`
- `myteam tool doctor <tool>`
- `myteam tool install-launchers`

Potentially also:

- `myteam tool resolve <tool>`

which would print the exact argv and environment inputs for debugging.

### Avoid a broad `myteam run`

A generic `myteam run <tool>` is tempting, but it creates the wrong approval target. Even if it is
useful for humans, it should not be the primary story for agent harness integration.

If such a command exists, it should be positioned as a debugging convenience, not the core
approvable interface.

## Suggested File Model

One possible layout:

```text
.myteam/
  runtimes.yml
  tools/
    list-active-courses.yml
    pytest-project.yml
  .bin/
    list-active-courses
    pytest-project
```

Where:

- `runtimes.yml` declares named runtimes
- `tools/*.yml` declares tools and their runtime binding
- `.bin/` is generated output, not hand-edited source

This keeps declarations inspectable and launchers disposable.
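As an illustrative sketch only, the declarations in that layout might look like the following. Every field name here is an assumption, not a settled schema:

```yaml
# .myteam/runtimes.yml (hypothetical schema)
runtimes:
  canvas/api:
    kind: python-venv
    venv_path: .venv
    env_files:
      - .env.canvas

# .myteam/tools/list-active-courses.yml (hypothetical schema)
tool: list_active_courses
entry_point: canvas/list_active_courses.py
runtime: canvas/api
```

The `.bin/` launchers would then be generated from these declarations rather than written by hand.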
+ +## Execution Semantics + +When `myteam` materializes a launcher, the launcher should: + +1. load only the declared env sources +2. set only the declared fixed variables +3. exec the declared interpreter or executable +4. invoke only the declared tool entry point + +It should not: + +- open a generic interactive shell +- source arbitrary shell code unless explicitly configured through a high-trust escape hatch +- provide a reusable environment session for later commands + +## Relationship To Existing `myteam` Concepts + +This environment model fits naturally with existing roles, skills, and tools: + +- roles and skills still provide discovery and instructions +- tools remain the executable units +- runtimes become explicit support objects that tools depend on + +That is better than overloading role `load.py` or skill `load.py` to perform runtime activation. + +## Security Notes + +This feature must be designed around least surprise: + +- an agent should be able to inspect what a launcher will do +- a user should be able to approve one tool without implicitly approving many others +- environment loading should be declarative where possible + +The biggest risk is accidentally reintroducing a generic shell bootstrap under a more structured +name. The design should resist that. 
+ +## Implementation Strategy + +### Phase 1: metadata and resolver + +- define runtime and tool manifests +- implement parsing and validation +- implement command resolution to exact argv plus environment + +### Phase 2: launcher generation + +- generate per-tool launchers under a managed directory such as `.myteam/.bin/` +- keep launchers deterministic and inspectable +- add diagnostics for missing interpreters, missing venvs, and malformed env files + +### Phase 3: documentation and integration + +- update tool guidance so agents prefer declared launchers over ad hoc activation +- update templates and built-in instructions +- decide how launcher paths should be surfaced in role and skill discovery output + +## Open Questions + +- What manifest format is the right balance between readability and strictness? +- Should tool declarations remain optional, or eventually become the preferred way to expose tools? +- Should `myteam` generate POSIX shell launchers, Python launchers, or both? +- How should Windows support be handled if launcher generation becomes part of the public contract? +- Should `.env` loading support interpolation, or only literal key/value parsing? +- Is there any acceptable form of shell-based env activation that still preserves approval intent? diff --git a/src/governing_docs/backlog/list-skills-builtin-filter.md b/src/governing_docs/backlog/list-skills-builtin-filter.md new file mode 100644 index 0000000..ab0cea4 --- /dev/null +++ b/src/governing_docs/backlog/list-skills-builtin-filter.md @@ -0,0 +1,37 @@ +# `list_skills` Built-In Filter + +## Summary + +`list_skills` should take a boolean parameter that controls whether built-in skills are included in +its output. + +## Problem + +Built-in skills such as packaged maintenance or upgrade helpers are not always appropriate to show +in ordinary skill listings. 
+ +Right now, callers that want different behavior risk: + +- always showing built-in skills, even when they are just support scaffolding +- or hard-coding special-case filtering logic outside `list_skills` + +That makes skill discovery harder to control consistently. + +## Proposed Change + +Add a boolean parameter to `list_skills` so the caller can choose whether built-in skills are +listed. + +This should let callers: + +- hide built-in skills in normal project skill listings +- include built-in skills in contexts where upgrade or maintenance discovery matters +- keep the filtering decision in one place instead of scattering it across loaders + +## Design Questions + +- Should built-in skills be included by default or excluded by default? +- Should the parameter mean `include_builtins` or `exclude_builtins`? +- How should `list_skills` determine that a skill is built-in: path convention, metadata, or some + other marker? +- Does `list_roles` need a matching option, or is this only a skill-listing problem? diff --git a/src/governing_docs/backlog/third-party-skill-trees.md b/src/governing_docs/backlog/third-party-skill-trees.md new file mode 100644 index 0000000..2705998 --- /dev/null +++ b/src/governing_docs/backlog/third-party-skill-trees.md @@ -0,0 +1,280 @@ +# Third-Party Skill Tree Provider Design + +## Summary + +Allow installed third-party Python packages to expose entire `myteam` skill trees through a +provider interface, while keeping `myteam get skill` local-only and deterministic at runtime. + +The key design move is to replace the current special-case `builtins/` resolver with a general +"skill provider" registry. Each provider owns one or more top-level skill namespaces and maps them +to a local filesystem tree containing normal `skill.md` and `load.py` nodes. 
+ +This preserves the current loading model: + +- `myteam get skill ...` still resolves to a local directory +- `load.py` is still the execution boundary +- discovery still comes from the files under the resolved node + +What changes is only how the root of a skill path is resolved. + +## Problem + +Today `myteam` supports two skill sources: + +- project-local skills under `.myteam/` +- packaged built-in skills under the reserved `builtins/` namespace + +That works because `builtins/` is hard-coded as a second filesystem root. It does not scale to +third parties: + +- importing a third-party module inside `load.py` is enough for one skill, but not for a full tree +- nested children need stable path resolution and discovery, not ad hoc delegation code +- multiple packages need conflict handling for namespace ownership +- the application interface currently describes only project-local skills plus `builtins/` + +If we solve this by letting arbitrary `load.py` files reach out to package internals dynamically, we +lose the simplicity of "a skill path resolves to a directory tree and then executes `load.py` there." + +## Goals + +- Support whole third-party skill trees, not just one-off imported skills. +- Keep `myteam get skill` local-only and filesystem-based at runtime. +- Preserve the existing mental model that a skill is a directory with `skill.md` and `load.py`. +- Let third-party packages ship and version their own skill content. +- Allow multiple providers to coexist without ambiguous path resolution. +- Keep the current built-in `builtins/` behavior as one instance of the same mechanism. + +## Non-Goals + +- Remote fetching during `get skill`. +- Merging project-local and provider-owned content into one mixed directory view. +- Letting two providers contribute children under the same exact namespace. +- Replacing normal on-disk skill trees with an API-only virtual tree. 
+ +## Proposed Model + +### Skill providers + +A skill provider is a local source of `myteam` skill trees. A provider declares: + +- provider name +- owning Python package or module +- one or more top-level namespaces it owns +- filesystem root for its skill tree +- optional metadata such as version or provenance text + +Each provider root must contain normal `myteam` skill directories below it. + +Example: + +- built-in provider owns `builtins` +- package `acme_toolkit` owns `acme` +- package `pandas_agent_tools` owns `pandas` + +Then: + +- `myteam get skill builtins/changelog` resolves into the built-in packaged tree +- `myteam get skill acme/sql/debugging` resolves into the installed `acme_toolkit` tree + +### Namespace ownership + +Each top-level namespace has exactly one owner. + +Resolution order should be: + +1. provider-owned namespace if the first path segment matches a registered provider namespace +2. otherwise project-local `.myteam/` + +This keeps third-party paths explicit and avoids surprising project overrides of shipped provider +content. + +That is the same rule already used for `builtins/`: a reserved namespace resolves away from +`.myteam/`. + +### Filesystem requirement + +Providers should expose a real directory on local disk, not only Python objects. + +That directory is the canonical tree for: + +- existence checks +- listing child skills +- reading `skill.md` +- executing `load.py` + +This keeps existing utilities, templates, and tests mostly valid. + +## Provider Registration + +Use Python package entry points rather than import-by-convention from arbitrary `load.py` code. + +Proposed entry point group: + +- `myteam.skill_providers` + +Each entry point returns provider metadata. For example, a package could expose: + +- namespaces: `["acme"]` +- root: `/.../site-packages/acme_toolkit/myteam_skills` + +The provider object should be intentionally small. It only needs to answer: + +- what namespaces do you own? 
- where is the local skill-tree root for each namespace?

This is enough for path resolution and keeps provider loading simple.

### Why entry points

Entry points are better than "teach users to import a package from `load.py`" because they:

- scale from one skill to full trees
- allow discovery without custom glue code in project files
- make namespace conflicts detectable before skill execution
- fit installed third-party libraries naturally

## Interface Sketch

The exact Python API can change, but the CLI-facing contract should look like this:

- `commands.get_skill()` asks a resolver for the base directory for a skill path
- the resolver returns `(source_kind, root_dir, logical_path)` or a similar structure
- `commands.get_skill()` joins the path under that root, validates `is_skill_dir`, and executes
  `load.py`

Near-term internal interface:

```python
from dataclasses import dataclass
from pathlib import Path


@dataclass(frozen=True)
class SkillProvider:
    name: str
    namespaces: tuple[str, ...]

    def root_for_namespace(self, namespace: str) -> Path: ...
```

Resolver helpers:

```python
from collections.abc import Iterable


def iter_skill_providers() -> Iterable[SkillProvider]: ...
def resolve_skill_root(skill_path: str) -> tuple[str, Path, str]: ...
def provider_for_namespace(namespace: str) -> SkillProvider | None: ...
```

The built-in tree should be migrated onto this interface instead of remaining a special case.

## Discovery Behavior

Discovery should remain local to the resolved node.

Once a provider-owned path resolves to a real directory, the existing listing functions can operate
normally on that subtree.

At the root role level, `myteam` may also want to expose provider namespaces as discoverable entry
points, similar to how `builtins` is surfaced today. That should be treated as a separate UX
decision from path resolution itself.
+ +Recommended near-term behavior: + +- keep root listing of packaged `builtins` +- do not automatically list every installed third-party namespace from the root role yet + +Reason: dumping all installed provider namespaces into every project's discovery output may be noisy +and may expose tools the project author never intended to advertise. Third-party resolution can +exist first; root-level discoverability can be added later behind explicit project authoring. + +## Conflict Policy + +Namespace conflicts must fail clearly. + +Cases: + +- provider namespace collides with another provider namespace +- provider namespace collides with reserved internal names + +Recommended rule: + +- `builtins` remains reserved for the packaged built-in provider +- duplicate provider namespace registration raises a deterministic error +- project-local `.myteam/` does not override a provider-owned namespace + +This is stricter than Python import shadowing, and that is good here because instruction loading +needs predictability. + +## Packaging Guidance For Third Parties + +A third-party package that wants to ship skills should include a normal directory tree in package +data, for example: + +```text +acme_toolkit/ + myteam_skills/ + acme/ + skill.md + load.py + sql/ + skill.md + load.py +``` + +The provider root for namespace `acme` would be the parent directory containing `acme/`, not the +`acme/` directory itself, so path joining stays uniform with the built-in resolver model. + +This gives library authors a low-friction authoring story: they ship ordinary `myteam` skill files. + +## Security and Trust Notes + +This feature does not create a new execution boundary. `load.py` from installed packages is already +code execution, just as project-local `load.py` is. + +However, it does widen the set of local code that may be loaded through `myteam get skill`. 
That +means: + +- provider namespaces should be explicit +- conflict errors should be loud +- future trust/provenance work should include provider packages as another instruction source class + +This should stay out of scope for the initial provider implementation, but the design should leave +room for a later `myteam list providers` or trust-reporting command. + +## Implementation Plan + +### Phase 1: internal resolver cleanup + +- Introduce a general skill-provider resolver abstraction. +- Reimplement `builtins/` on top of that abstraction. +- Replace direct `builtins` path branching in `commands.py` and helper code. + +### Phase 2: packaged provider loading + +- Load third-party providers from Python entry points. +- Validate namespace uniqueness. +- Resolve provider-owned skill paths through the shared resolver. + +### Phase 3: user-facing polish + +- Document provider-owned namespaces in the README and application interface. +- Decide whether and how provider namespaces appear in discovery listings. +- Consider diagnostics such as listing installed providers and their owned namespaces. + +## Code Impact Areas + +- `src/myteam/commands.py` + `get_skill()` should stop hard-coding `builtins`. +- `src/myteam/utils.py` + builtin-specific helpers should be generalized into provider resolution helpers. +- `src/myteam/paths.py` + likely keep only built-in path helpers that the built-in provider itself uses. +- tests + add provider registration and namespace-conflict cases. +- docs + update README and `application_interface.md` once the design is implemented. + +## Open Questions + +- Should provider namespaces be discoverable only when a project explicitly opts in? +- Do we want a command to list installed skill providers for debugging? +- Should provider loading ignore broken entry points and continue, or fail fast? +- Should a provider be allowed to expose multiple namespaces, or should we require one provider per + namespace? 
+- Should role trees eventually gain the same provider mechanism, or should this remain skill-only? From d9170dd9bd9a68752c496f9a17e32b9b02052407 Mon Sep 17 00:00:00 2001 From: Gordon Bean Date: Fri, 27 Mar 2026 16:45:52 -0600 Subject: [PATCH 02/14] changes to feature-pipiline skill.md. wip --- .myteam/feature-pipeline/skill.md | 35 ++++++++++++++++++------------- 1 file changed, 20 insertions(+), 15 deletions(-) diff --git a/.myteam/feature-pipeline/skill.md b/.myteam/feature-pipeline/skill.md index d3e3a8b..5af9ac1 100644 --- a/.myteam/feature-pipeline/skill.md +++ b/.myteam/feature-pipeline/skill.md @@ -29,9 +29,18 @@ to name the branch, get enough details to select a reasonable name. The name does NOT need to be perfect; as long as it is loosely relevant, it will work great. -### Plan the feature +### Understand the feature and update the interface document -The goal of this step is to thoroughly understand the requested feature. +The goal of this step is to thoroughly understand the requested feature +and document how the primary interface of the application should be changed. + +First read `src/governing_docs/application_interface.md` to understand +the current design and intent of the project. +This document describes what this app should do, how it behaves, etc. +It is the black-box description of the user's experience with the application. + +Then seek to understand what the user wants to change. +Is it a new behavior? Modifying an existing behavior? A bugfix? Questions that might be relevant: @@ -43,19 +52,6 @@ Questions that might be relevant: - Is there documentation via skills or in the repo that suggests a strategy? - Does the user have an opinion on which strategy is used? -Prepare a document in `src/governing_docs/feature_plans/.md` -that describes the specific details and strategies decided on for the feature. - -Get approval from the user on this document before continuing. - -When this step is complete, commit your changes before moving on. 
- -### Update the interface document - -`src/governing_docs/application_interface.md` describes -what this app should do, how it behaves, etc. -It is the black-box description of the user's experience with the application. - Based on the details in the feature plan document, determine how the user interface of the application will change. @@ -66,6 +62,15 @@ before you continue. When this step is complete, commit your changes before moving on. +### Design the feature + +Prepare a document in `src/governing_docs/feature_plans/.md` +that describes the specific details and strategies decided on for the feature. + +Get approval from the user on this document before continuing. + +When this step is complete, commit your changes before moving on. + ### Refactor the framework Load the `framework-oriented-design` skill. From d488d6225fb6411bdfc3b7d319e6016c98aa9b6e Mon Sep 17 00:00:00 2001 From: gbean Date: Sat, 28 Mar 2026 16:38:27 -0600 Subject: [PATCH 03/14] refinements to feature-pipeline --- .../code-linter/load.py | 0 .../code-linter/role.md | 0 .../framework-oriented-design/skill.md | 5 +- .../project-myteam-update/load.py | 0 .../project-myteam-update/role.md | 0 .myteam/feature-pipeline/skill.md | 102 ++++++++++++------ .myteam/role.md | 1 + 7 files changed, 74 insertions(+), 34 deletions(-) rename .myteam/{ => feature-pipeline}/code-linter/load.py (100%) rename .myteam/{ => feature-pipeline}/code-linter/role.md (100%) rename .myteam/{ => feature-pipeline}/project-myteam-update/load.py (100%) rename .myteam/{ => feature-pipeline}/project-myteam-update/role.md (100%) diff --git a/.myteam/code-linter/load.py b/.myteam/feature-pipeline/code-linter/load.py similarity index 100% rename from .myteam/code-linter/load.py rename to .myteam/feature-pipeline/code-linter/load.py diff --git a/.myteam/code-linter/role.md b/.myteam/feature-pipeline/code-linter/role.md similarity index 100% rename from .myteam/code-linter/role.md rename to 
.myteam/feature-pipeline/code-linter/role.md diff --git a/.myteam/feature-pipeline/framework-oriented-design/skill.md b/.myteam/feature-pipeline/framework-oriented-design/skill.md index ced08c2..c6e5620 100644 --- a/.myteam/feature-pipeline/framework-oriented-design/skill.md +++ b/.myteam/feature-pipeline/framework-oriented-design/skill.md @@ -13,6 +13,9 @@ An application is a combination of framework and business logic. We seek to separate framework code for business logic code. +*Framework* refers to the internal, helper code that supports the primary API of the application, +as well as the conventions and patterns used in the codebase to create consistency and structure. + When preparing to implement a feature, understand the existing framework: - Why was the code written the way it does? What problems does the current design solve? @@ -26,9 +29,9 @@ followed by a change to the business logic. If a change to the framework is needed, refactor the code accordingly without adding any new behavior. + Guidance: -- - Review the principles of self-documenting code and follow them. - Functions should be simple, focused, and easy to read. - When creating helper functions, look for existing behavior. 
\ No newline at end of file diff --git a/.myteam/project-myteam-update/load.py b/.myteam/feature-pipeline/project-myteam-update/load.py similarity index 100% rename from .myteam/project-myteam-update/load.py rename to .myteam/feature-pipeline/project-myteam-update/load.py diff --git a/.myteam/project-myteam-update/role.md b/.myteam/feature-pipeline/project-myteam-update/role.md similarity index 100% rename from .myteam/project-myteam-update/role.md rename to .myteam/feature-pipeline/project-myteam-update/role.md diff --git a/.myteam/feature-pipeline/skill.md b/.myteam/feature-pipeline/skill.md index 5af9ac1..0bc0745 100644 --- a/.myteam/feature-pipeline/skill.md +++ b/.myteam/feature-pipeline/skill.md @@ -7,27 +7,13 @@ description: | ## Feature Pipeline -Carefully follow each of these steps in order. -Create a plan that has checkboxes for each of these steps. - -When using the term "feature", we mean any change to the code or assets, -not just new additions to the codebase. +Carefully follow each of these steps in order. Do not proceed to a later step until the earlier step is finished. +We're not worried about multitasking and efficiency; we care about process and quality. ### Create the git branch Check the current branch. -If you are on `main`, create a new branch for the feature. -The branch name should be simple but descriptive. - -If you are on a different branch (not `main`), -confirm with the user whether this branch should be used for the feature, -or whether a new branch should be created. - -If the user's description of the feature is too vague -to name the branch, get enough details to select a reasonable name. - -The name does NOT need to be perfect; as long as it -is loosely relevant, it will work great. +If you are on `main`, remind the user to start a new branch and wait for them to do so before proceeding. ### Understand the feature and update the interface document @@ -44,18 +30,10 @@ Is it a new behavior? Modifying an existing behavior? A bugfix? 
Questions that might be relevant: -- What change in behavior does the user hope for? - - What behaviors should NOT change? -- How might this change be implemented? - - If there are multiple reasonable strategies, what distinguishes them? - - Are there dependencies that may change? - - Is there documentation via skills or in the repo that suggests a strategy? - - Does the user have an opinion on which strategy is used? +- What changes in behavior does the user hope for? +- What behaviors should NOT change? -Based on the details in the feature plan document, determine how the -user interface of the application will change. - -Update the `application_interface.md` document to reflect the new feature. +Update the `application_interface.md` document to reflect the changes. Review these changes with the user. Make sure you are both on the same page before you continue. @@ -64,21 +42,70 @@ When this step is complete, commit your changes before moving on. ### Design the feature -Prepare a document in `src/governing_docs/feature_plans/.md` +The goal of this step is to understand how the feature will be implemented. +It is NOT to implement that changes. That will come later. + +#### Understand the context + +Load the `framework-oriented-design` skill. + +Then look through the code. Understand the framework and infrastructure in place that supports the current application. +Notice the intentional design decisions and articulate the reasoning for those decisions. + +#### Plan the feature + +Now, consider how this feature could be implemented. + +Is the existing framework sufficient to support this new feature? +If not, how should the framework be modified to naturally support the feature? + +Implementing a feature has two phases: 1) updating the framework, and 2) sliding the new feature into place. +If the framework is right, the new feature will be simple to implement. +So, make sure we understand how the framework is going to change to accommodate the feature. 
+ +Think through how the framework changes will make the feature implementation simple. +As necessary, iterate on this process until you have a simple refactor that supports a simple feature implementation. + +If changes to the framework are needed, consider: + +- If there are multiple reasonable strategies, what distinguishes them? +- Are there dependencies that may change? +- Is there documentation via skills or in the repo that suggests a strategy? +- Does the user have an opinion on which strategy is used? + +Be specific. This is the stage of the process where you figure out all the details. +Do not leave any decisions for later. + +Think critically about the changes. Is there a simpler way? +Simplicity is SO important to maintaining a codebase. Be very skeptical of new complexity. + +Think also about consistency. Is there a style or pattern already used in the codebase that could be followed? + +Prepare a document named `src/governing_docs/feature_plans/.md` that describes the specific details and strategies decided on for the feature. +This document should have two main parts: + +1) Framework refactor: here describe the feature-neutral refactorings to the existing code that prepare the code for the new feature + - The existing test suite should not need to change in response to this step + - If changes are needed because the framework has changed, and thus the testing infrastructure must be modified, that's fine +2) Feature addition: here describe the code needed to introduce the new feature + Get approval from the user on this document before continuing. When this step is complete, commit your changes before moving on. ### Refactor the framework -Load the `framework-oriented-design` skill. - -Following its guidance, make any necessary changes to the application framework. +Following the feature plan part 1 guidance, make any necessary changes to the application framework. The existing tests should all still pass. 
+Describe to the user the changes that were made and why they were made. +Explain how these changes will make adding the feature a simple process. + +Get approval from the user on these changes before continuing. + When this step is complete, commit your changes before moving on. ### Update the test suite @@ -93,15 +120,24 @@ Make these changes to the test suite. Review the changes one more time: do they faithfully capture the new interface design? Make changes as needed. +Explain to the user how the new tests address the changes made to the interface document. +Get their approval before continuing. + When this step is complete, commit your changes before moving on. ### Implement the feature -Now that the framework has been updated (if necessary) and the tests are in place, +Now that the framework has been updated (as necessary) and the tests are in place, implement the feature. +Follow the guidance in part 2 of the feature document. + +Use the existing framework to support the feature. + The tests should pass. +Review the changes made with the user. Get their approval before continuing. + When this step is complete, commit your changes before moving on. ### Concluding the feature diff --git a/.myteam/role.md b/.myteam/role.md index 90157ce..dfb0d31 100644 --- a/.myteam/role.md +++ b/.myteam/role.md @@ -7,3 +7,4 @@ If there is a skill that sounds like it might apply, you MUST load it. DO NOT assume ANYTHING is simple enough to justify ignoring a skill or role. +According to the request of the user, load the appropriate skill and proceed. 
From abf721f0fde0823a3e5166560e9ce5942c153dda Mon Sep 17 00:00:00 2001 From: gbean Date: Sat, 28 Mar 2026 17:04:59 -0600 Subject: [PATCH 04/14] refinements to feature-pipeline --- .myteam/asking-questions/load.py | 21 +++++++++++++++++++++ .myteam/asking-questions/skill.md | 22 ++++++++++++++++++++++ .myteam/feature-pipeline/skill.md | 5 ++++- 3 files changed, 47 insertions(+), 1 deletion(-) create mode 100644 .myteam/asking-questions/load.py create mode 100644 .myteam/asking-questions/skill.md diff --git a/.myteam/asking-questions/load.py b/.myteam/asking-questions/load.py new file mode 100644 index 0000000..557155a --- /dev/null +++ b/.myteam/asking-questions/load.py @@ -0,0 +1,21 @@ +#!/usr/bin/env python3 +from __future__ import annotations + +from pathlib import Path + +from myteam.utils import get_active_myteam_root, list_roles, list_skills, list_tools, print_instructions + + +def main() -> int: + base = Path(__file__).resolve().parent # .myteam/ + print_instructions(base) + myteam = get_active_myteam_root(base) + list_roles(base, myteam, []) + list_skills(base, myteam, []) + list_tools(base, myteam, []) + + return 0 + + +if __name__ == "__main__": + raise SystemExit(main()) diff --git a/.myteam/asking-questions/skill.md b/.myteam/asking-questions/skill.md new file mode 100644 index 0000000..3fbdb9d --- /dev/null +++ b/.myteam/asking-questions/skill.md @@ -0,0 +1,22 @@ +--- +name: Asking Questions +description: | + This skill described the process to follow when asking the user questions. + If you will ask the user questions in any way, load this skill first. +--- + +## Asking Questions + +When asking questions, always ask the questions one-at-a-time. + +List out the questions you want to ask in a plan. + +Then go through that plan one question at a time. +This lets the user discuss the question before providing an answer. + +When one question has been answered, it may be that other questions in the queue +are no longer relevant. 
Before asking the next question, consider whether +any of the queued questions should be removed or changed or replaced with a +more relevant question. + +When all your questions are answered, continue. diff --git a/.myteam/feature-pipeline/skill.md b/.myteam/feature-pipeline/skill.md index 0bc0745..abb9bd8 100644 --- a/.myteam/feature-pipeline/skill.md +++ b/.myteam/feature-pipeline/skill.md @@ -28,12 +28,15 @@ It is the black-box description of the user's experience with the application. Then seek to understand what the user wants to change. Is it a new behavior? Modifying an existing behavior? A bugfix? +Discuss these things with the user. Involve them in the process. + Questions that might be relevant: - What changes in behavior does the user hope for? - What behaviors should NOT change? -Update the `application_interface.md` document to reflect the changes. +Once you have a thorough understanding of the user's intent, +update the `application_interface.md` document to reflect the changes. Review these changes with the user. Make sure you are both on the same page before you continue. From 06d0a9208cfc9f29ce1ece2be1cf4d5f9a795abc Mon Sep 17 00:00:00 2001 From: gbean Date: Sat, 28 Mar 2026 17:26:16 -0600 Subject: [PATCH 05/14] Define download tracking interface --- src/governing_docs/application_interface.md | 23 +++++-- .../backlog/download-checksum.md | 64 +++++++++++++++++++ src/governing_docs/backlog/download.md | 8 ++- 3 files changed, 88 insertions(+), 7 deletions(-) create mode 100644 src/governing_docs/backlog/download-checksum.md diff --git a/src/governing_docs/application_interface.md b/src/governing_docs/application_interface.md index c915a61..f564698 100644 --- a/src/governing_docs/application_interface.md +++ b/src/governing_docs/application_interface.md @@ -222,27 +222,38 @@ Failure conditions that matter at the interface: ### `myteam download ` -Downloads a named roster from a remote repository. 
+Downloads a named roster folder from a remote repository as a managed local install. Inputs: -- `` identifies the roster entry to download. +- `` identifies the remote roster folder to download. - The command also supports an optional destination and alternate repository through its CLI wiring. +- If no destination is provided, the roster path is installed under `.myteam/` using the same relative + folder path as the remote roster. Expected outcome on success: -- Downloads the requested roster content from the configured repository. -- Writes the downloaded files into `.myteam/` by default. -- Creates destination directories as needed. +- Downloads the requested roster folder content from the configured repository. +- Creates one managed local folder for that install. +- Writes a `.source.yml` provenance file at the root of the managed local folder. +- Writes downloaded files inside that managed local folder while preserving their relative paths within + the roster. - Prints progress while downloading. User-visible result: -- The downloaded roster becomes available on disk in the destination directory, ready to be loaded or edited. +- The downloaded roster becomes available on disk as a managed folder, ready to be loaded or edited. +- The managed folder records enough source information for later provenance-aware commands. Failure conditions that matter at the interface: - If the roster name does not exist in the repository, the command exits with an error and reports available roster names. +- If the requested roster resolves to a single file instead of a folder, the command exits with an error. +- If the destination already contains the same managed source, the command exits with an error that + tells the caller to run `myteam update ` instead of using `download` again. 
+- If unrelated content already exists at the destination path, the command exits with an error that + explains the content is not the same managed source and tells the caller to delete it or choose a + different destination instead of merging. - If the remote metadata or file downloads fail, the command exits with an error. ### `myteam --version` diff --git a/src/governing_docs/backlog/download-checksum.md b/src/governing_docs/backlog/download-checksum.md new file mode 100644 index 0000000..e41a5ee --- /dev/null +++ b/src/governing_docs/backlog/download-checksum.md @@ -0,0 +1,64 @@ +# Download Checksum Tracking + +## Summary + +Extend managed download metadata so each installed subtree records a checksum of the original +downloaded content, then use that checksum in future update flows to detect local modifications before +overwriting managed files. + +This work is intentionally separate from the initial download-tracking feature. The first feature only +records origin metadata in `.source.yml`; checksum recording and checksum-based decisions belong to +this later feature. + +## Goals + +- Detect whether a managed downloaded subtree still matches the content originally installed. +- Let future `myteam update` behavior distinguish "same source, unchanged local content" from + "same source, locally modified content." +- Reuse the existing migration-style safety model: block risky replacement by default and direct the + caller through an explicit next step instead of silently overwriting. + +## Proposed Behavior + +### Metadata + +`.source.yml` should include a checksum that represents the original downloaded content for the managed +subtree. + +The checksum should be stable across platforms and based on the downloaded file set and their contents, +not incidental filesystem metadata. + +### `myteam update` + +Before replacing files in a managed subtree, `update` should compare the current local subtree against +the original checksum recorded in `.source.yml`. 
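To make "compare the current local subtree against the original checksum" concrete, here is a minimal sketch assuming SHA-256 over a sorted traversal of relative paths and file bytes; the function name and the rule excluding `.source.yml` itself are illustrative, not part of the design:

```python
from __future__ import annotations

import hashlib
from pathlib import Path


def subtree_checksum(root: Path) -> str:
    """Hash a managed subtree from a canonical (sorted) file traversal.

    `.source.yml` is excluded so the recorded checksum does not depend on
    the metadata file that stores it.
    """
    digest = hashlib.sha256()
    for path in sorted(root.rglob("*")):
        if not path.is_file() or path.name == ".source.yml":
            continue
        # POSIX-style relative paths keep the result stable across platforms.
        rel = path.relative_to(root).as_posix()
        digest.update(rel.encode("utf-8"))
        digest.update(b"\0")
        digest.update(path.read_bytes())
        digest.update(b"\0")
    return digest.hexdigest()
```

Hashing both the relative path and the contents means renames and edits alike change the checksum, while filesystem metadata such as timestamps does not.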
+ +Outcomes: + +- if the current local checksum matches the recorded original checksum, the subtree is clean and may be + updated normally +- if the checksum differs, `update` should treat the subtree as locally modified and stop before + overwriting it +- the user-facing failure should explain that the managed content has local changes and that a separate + guided resolution flow is required + +## Scope Boundaries + +- This document does not define the final merge or conflict-resolution interface for dirty managed + trees. +- This document does not define trust verification for downloaded content. +- This document does not require `download` itself to compare current content against the checksum; + that behavior belongs to the future update workflow. + +## Implementation Notes + +- The checksum should be computed from a canonical traversal of the managed subtree. +- The checksum format should be straightforward to recompute during future update and migration flows. +- The eventual dirty-content handling should align with the existing migration philosophy: clear + explanation, no silent overwrite, explicit operator action required. + +## Open Follow-Up Work + +- Design the exact checksum algorithm and canonical file-ordering rules. +- Define the user workflow for resolving dirty managed subtrees during `myteam update`. +- Decide whether a future force option should exist, and if so, how it should be gated. diff --git a/src/governing_docs/backlog/download.md b/src/governing_docs/backlog/download.md index d0782a6..73f9fdf 100644 --- a/src/governing_docs/backlog/download.md +++ b/src/governing_docs/backlog/download.md @@ -19,7 +19,7 @@ This preserves the current `get role` / `get skill` model and avoids turning rol Treat `download` as an install operation instead of a one-shot copy. -When content is downloaded, write a hidden metadata file at the root of the installed subtree. A placeholder name like `.myteam-source.yml` is sufficient for design purposes. 
+When content is downloaded, write a hidden metadata file named `.source.yml` at the root of the installed subtree. The metadata should record: @@ -30,6 +30,12 @@ The metadata should record: - download timestamp - optional remote tree SHA or similar fingerprint +If `download` targets a local folder that already exists and `.source.yml` says it came from the same +source, `download` should not overwrite it. Instead, it should direct the caller to `myteam update `. + +If the existing destination contains unrelated content, `download` should fail and tell the caller to +delete the destination or choose a different local path. + ### `myteam update [path]` Add an `update` command that uses stored metadata to re-download installed remote content. From 6d4aec2d7d2420105e22a07516d1c7cbb142491f Mon Sep 17 00:00:00 2001 From: gbean Date: Sat, 28 Mar 2026 19:12:41 -0600 Subject: [PATCH 06/14] Refactor roster download flow --- .../feature_plans/download-track.md | 106 ++++++++++++++++++ src/myteam/rosters.py | 62 ++++++---- 2 files changed, 149 insertions(+), 19 deletions(-) create mode 100644 src/governing_docs/feature_plans/download-track.md diff --git a/src/governing_docs/feature_plans/download-track.md b/src/governing_docs/feature_plans/download-track.md new file mode 100644 index 0000000..aa9a3db --- /dev/null +++ b/src/governing_docs/feature_plans/download-track.md @@ -0,0 +1,106 @@ +# Download Tracking Feature Plan + +## Framework Refactor + +### Current design + +`src/myteam/rosters.py` currently mixes four concerns inside one flow: + +- locating a roster entry in the remote repository +- deciding whether that entry is a tree or blob +- mapping the remote entry onto a local destination path +- downloading files directly into that destination + +That structure kept the initial implementation short, but it makes the new managed-install behavior +awkward because the local install root, overwrite checks, and metadata writing all need to happen as +one coherent operation. 
+ +### Refactor goal + +Refactor the roster download code around an explicit managed install root. + +The refactor should separate: + +- remote roster resolution +- local install-path resolution +- destination validation +- file download into a managed root +- provenance metadata writing + +### Planned refactor + +1. Introduce helpers that compute the managed local root for a roster download. + - Default installs should preserve the remote roster path under `.myteam/`. + - Explicit destinations should be treated as the managed local root. +2. Introduce a helper that validates the roster entry type before download begins. + - Tree rosters remain supported. + - Blob rosters fail immediately with a clear error. +3. Introduce a helper that validates the target path before any files are written. + - If the target path does not exist, the install may proceed. + - If the target path exists and contains a matching `.source.yml`, fail with guidance to run + `myteam update `. + - If the target path exists without matching managed-source metadata, fail with guidance to delete + the destination or choose a different path. +4. Introduce a helper that writes `.source.yml` from structured metadata after file download succeeds. + - Use the existing YAML dependency already present in the project. +5. Keep the network fetch primitive centered on the existing `_fetch_json` and `_download_file` + helpers so the refactor stays local to roster install orchestration. + +### Why this framework change is sufficient + +Once install-root resolution and destination validation are explicit, the feature itself becomes +simple: + +- download tree files into one managed root +- write `.source.yml` +- reject unsupported or conflicting targets before any write occurs + +That keeps the business behavior small and avoids spreading provenance logic into unrelated command +code. 
+ +## Feature Addition + +### Behavior to implement + +Implement managed folder downloads for `myteam download` with the following behavior: + +1. `myteam download ` + - resolves `` as a folder roster only + - installs it under `.myteam//` + - writes `.source.yml` at `.myteam//.source.yml` +2. `myteam download ` + - resolves `` as a folder roster only + - installs the roster contents into the explicit destination path under `.myteam/` + - writes `.source.yml` at the explicit destination root +3. If the remote roster is a blob, fail with a clear folder-only error. +4. If the destination already exists and belongs to the same recorded source, fail with guidance to + run `myteam update `. +5. If the destination already exists and is unrelated content, fail with guidance to delete it or + choose a different destination. + +### `.source.yml` contents for this feature + +This feature should record source-tracking metadata only. Checksum fields are intentionally deferred to + backlog work. + +The metadata should include: + +- source repo identifier +- source roster path +- source ref used for download +- local install path +- download timestamp + +### Test updates anticipated by this plan + +`tests/test_download_flow.py` should be updated to verify: + +- default tree installs preserve the roster folder path under `.myteam/` +- explicit destination installs write into the provided managed root +- `.source.yml` is created at the managed root +- blob rosters fail with the new folder-only error +- same-source existing installs fail with `myteam update ` guidance +- unrelated existing destinations fail with delete-or-relocate guidance + +The tests should continue using the high-level CLI harness and monkeypatched roster fetch/download +helpers so behavior is asserted through command outcomes and final filesystem state. 
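To make the `.source.yml` shape concrete, here is a sketch of the provenance writer; the field names follow this plan, while the function name and the hand-rolled flat YAML output are illustrative (the implementation would use the project's existing YAML dependency):

```python
from __future__ import annotations

from datetime import datetime, timezone
from pathlib import Path


def write_source_metadata(managed_root: Path, repo: str, roster: str, ref: str) -> Path:
    """Record provenance for a managed install as flat YAML key/value lines."""
    metadata = {
        "repo": repo,            # source repo identifier
        "roster": roster,        # source roster path
        "ref": ref,              # source ref used for download
        "local_path": managed_root.as_posix(),
        "downloaded_at": datetime.now(timezone.utc).isoformat(),
    }
    source_file = managed_root / ".source.yml"
    source_file.write_text(
        "".join(f"{key}: {value}\n" for key, value in metadata.items()),
        encoding="utf-8",
    )
    return source_file
```

Writing this file only after all downloads succeed means a half-finished install never looks like a managed source to later `download` or `update` runs.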
diff --git a/src/myteam/rosters.py b/src/myteam/rosters.py index 419e54b..06d5bb4 100644 --- a/src/myteam/rosters.py +++ b/src/myteam/rosters.py @@ -4,6 +4,7 @@ import json import sys import urllib.request +from collections.abc import Iterable from pathlib import Path APP_NAME = "myteam" @@ -15,6 +16,12 @@ def _agents_root(base: Path) -> Path: return base / AGENTS_DIRNAME +def _download_destination(base: Path, destination: Path | str | None) -> Path: + if destination is None: + return _agents_root(base) + return Path(destination) + + def _repo_urls(repo: str) -> tuple[str, str]: repo_path = repo.strip().strip("/") if repo_path.count("/") != 1: @@ -52,7 +59,7 @@ def _fetch_available_rosters(roster_repository_url: str): return root_tree.get("tree", []) -def _fetch_roster_tree(roster: str, roster_repository_url: str): +def _fetch_roster_entry(roster: str, roster_repository_url: str) -> dict: roster_trees = _fetch_available_rosters(roster_repository_url) roster_tree = next( (entry for entry in roster_trees if entry.get("path") == roster), @@ -83,42 +90,59 @@ def _fetch_tree_files(roster_tree, roster_repository_url: str): return file_entries +def _blob_destination(blob_object: dict, destination: Path) -> Path: + file_name = blob_object.get("path", "").split("/")[-1] + return destination / file_name + + def _download_blob(blob_object: dict, destination: Path, roster_raw_base_url: str): - file_name = blob_object.get('path').split("/")[-1] + output_path = _blob_destination(blob_object, destination) + file_name = output_path.name print(f"\rDownloading {file_name}") - _download_file(f"{roster_raw_base_url}/{blob_object.get('path')}", destination / file_name) + _download_file(f"{roster_raw_base_url}/{blob_object.get('path')}", output_path) + + +def _tree_file_url(roster_dir_name: str, entry: dict, roster_raw_base_url: str) -> str: + return f"{roster_raw_base_url}/{roster_dir_name}/{entry.get('path')}" -def _download_tree_files(file_entries, roster_dir_name: str, destination: 
Path, roster_raw_base_url: str): +def _tree_file_destination(entry: dict, destination: Path) -> Path | None: + rel_path = entry.get("path") + if not rel_path: + return None + return destination / rel_path + + +def _download_tree_files(file_entries: Iterable[dict], roster_dir_name: str, destination: Path, roster_raw_base_url: str): + file_entries = list(file_entries) total = len(file_entries) for idx, entry in enumerate(file_entries, start=1): - rel_path = entry.get("path") - if not rel_path: + output_path = _tree_file_destination(entry, destination) + if output_path is None: continue - raw_url = f"{roster_raw_base_url}/{roster_dir_name}/{rel_path}" print(f"\rDownloading {roster_dir_name} {idx}/{total}", end="", file=sys.stderr) - _download_file(raw_url, destination / rel_path) + _download_file(_tree_file_url(roster_dir_name, entry, roster_raw_base_url), output_path) print("", file=sys.stderr) +def _download_roster_entry(roster_entry: dict, roster_name: str, destination: Path, roster_repository_url: str, roster_raw_base_url: str): + if roster_entry.get("type") == "blob": + _download_blob(roster_entry, destination, roster_raw_base_url) + return + tree_files = _fetch_tree_files(roster_entry, roster_repository_url) + _download_tree_files(tree_files, roster_name, destination, roster_raw_base_url) + + def download_roster( roster_dir_name: str, destination: Path | str | None = None, repo: str = DEFAULT_REPO, ): base = Path.cwd() - if destination is None: - destination = _agents_root(base) - else: - destination = Path(destination) - + destination = _download_destination(base, destination) roster_repository_url, roster_raw_base_url = _repo_urls(repo) - roster_tree = _fetch_roster_tree(roster_dir_name, roster_repository_url) - if roster_tree.get('type') == 'blob': - _download_blob(roster_tree, destination, roster_raw_base_url) - else: - tree_files = _fetch_tree_files(roster_tree, roster_repository_url) - _download_tree_files(tree_files, roster_dir_name, destination, 
roster_raw_base_url) + roster_entry = _fetch_roster_entry(roster_dir_name, roster_repository_url) + _download_roster_entry(roster_entry, roster_dir_name, destination, roster_repository_url, roster_raw_base_url) def list_available_rosters(repo: str = DEFAULT_REPO): From 1d3513cdeca4f677acce036a29282e21a57d78b3 Mon Sep 17 00:00:00 2001 From: gbean Date: Sat, 28 Mar 2026 19:14:46 -0600 Subject: [PATCH 07/14] Update download tests for managed installs --- tests/test_download_flow.py | 76 +++++++++++++++++++++++++++++++++---- 1 file changed, 69 insertions(+), 7 deletions(-) diff --git a/tests/test_download_flow.py b/tests/test_download_flow.py index f3cc7f8..76f9d1d 100644 --- a/tests/test_download_flow.py +++ b/tests/test_download_flow.py @@ -1,5 +1,6 @@ from __future__ import annotations +import yaml from pathlib import Path import pytest @@ -46,24 +47,85 @@ def fake_download(url: str, output_path: Path): result = run_myteam_inprocess(initialized_project, "download", "starter") assert result.exit_code == 0 - assert (initialized_project / ".myteam" / "role.md").exists() - assert (initialized_project / ".myteam" / "nested" / "skill.md").exists() + managed_root = initialized_project / ".myteam" / "starter" + assert (managed_root / "role.md").exists() + assert (managed_root / "nested" / "skill.md").exists() + metadata = yaml.safe_load((managed_root / ".source.yml").read_text(encoding="utf-8")) + assert metadata["repo"] == rosters.DEFAULT_REPO + assert metadata["roster"] == "starter" assert downloaded -def test_download_single_file_roster_writes_file(run_myteam_inprocess, initialized_project: Path, monkeypatch: pytest.MonkeyPatch): - monkeypatch.setattr(rosters, "_fetch_available_rosters", lambda _repo_url: [{"path": "starter.md", "type": "blob"}]) +def test_download_tree_roster_writes_to_explicit_destination( + run_myteam_inprocess, + initialized_project: Path, + monkeypatch: pytest.MonkeyPatch, +): + monkeypatch.setattr(rosters, "_fetch_available_rosters", lambda 
_repo_url: [{"path": "skills/foo", "type": "tree", "sha": "abc"}]) + monkeypatch.setattr( + rosters, + "_fetch_json", + lambda url: {"tree": [{"path": "skill.md", "type": "blob"}, {"path": "helpers/load.py", "type": "blob"}]} + if "abc" in url + else {"tree": [{"path": "skills/foo", "type": "tree", "sha": "abc"}]}, + ) def fake_download(url: str, output_path: Path): output_path.parent.mkdir(parents=True, exist_ok=True) - output_path.write_text("single file\n", encoding="utf-8") + output_path.write_text(f"downloaded from {url}\n", encoding="utf-8") monkeypatch.setattr(rosters, "_download_file", fake_download) - result = run_myteam_inprocess(initialized_project, "download", "starter.md") + result = run_myteam_inprocess(initialized_project, "download", "skills/foo", "bar/baz") assert result.exit_code == 0 - assert (initialized_project / ".myteam" / "starter.md").read_text(encoding="utf-8") == "single file\n" + managed_root = initialized_project / ".myteam" / "bar" / "baz" + assert (managed_root / "skill.md").exists() + assert (managed_root / "helpers" / "load.py").exists() + metadata = yaml.safe_load((managed_root / ".source.yml").read_text(encoding="utf-8")) + assert metadata["roster"] == "skills/foo" + assert metadata["local_path"] == ".myteam/bar/baz" + + +def test_download_single_file_roster_fails(run_myteam_inprocess, initialized_project: Path, monkeypatch: pytest.MonkeyPatch): + monkeypatch.setattr(rosters, "_fetch_available_rosters", lambda _repo_url: [{"path": "starter.md", "type": "blob"}]) + + result = run_myteam_inprocess(initialized_project, "download", "starter.md") + + assert result.exit_code == 1 + assert "folder rosters are supported" in result.stderr + + +def test_download_existing_same_source_directs_user_to_update( + run_myteam_inprocess, + initialized_project: Path, + monkeypatch: pytest.MonkeyPatch, +): + managed_root = initialized_project / ".myteam" / "starter" + managed_root.mkdir(parents=True) + (managed_root / ".source.yml").write_text("repo: 
beanlab/rosters\nroster: starter\n", encoding="utf-8") + monkeypatch.setattr(rosters, "_fetch_available_rosters", lambda _repo_url: [{"path": "starter", "type": "tree", "sha": "abc"}]) + + result = run_myteam_inprocess(initialized_project, "download", "starter") + + assert result.exit_code == 1 + assert "myteam update .myteam/starter" in result.stderr + + +def test_download_existing_unrelated_destination_fails_clearly( + run_myteam_inprocess, + initialized_project: Path, + monkeypatch: pytest.MonkeyPatch, +): + managed_root = initialized_project / ".myteam" / "starter" + managed_root.mkdir(parents=True) + (managed_root / "role.md").write_text("local content\n", encoding="utf-8") + monkeypatch.setattr(rosters, "_fetch_available_rosters", lambda _repo_url: [{"path": "starter", "type": "tree", "sha": "abc"}]) + + result = run_myteam_inprocess(initialized_project, "download", "starter") + + assert result.exit_code == 1 + assert "delete it or choose a different destination" in result.stderr def test_download_missing_roster_fails_with_available_names(run_myteam_inprocess, initialized_project: Path, monkeypatch: pytest.MonkeyPatch): From 37ec07f91afd9191901456d874dda33e8e442551 Mon Sep 17 00:00:00 2001 From: gbean Date: Sat, 28 Mar 2026 19:17:30 -0600 Subject: [PATCH 08/14] Implement managed roster downloads --- src/myteam/rosters.py | 100 +++++++++++++++++++++++++++++++++--------- 1 file changed, 80 insertions(+), 20 deletions(-) diff --git a/src/myteam/rosters.py b/src/myteam/rosters.py index 06d5bb4..445933a 100644 --- a/src/myteam/rosters.py +++ b/src/myteam/rosters.py @@ -5,21 +5,34 @@ import sys import urllib.request from collections.abc import Iterable +from datetime import UTC, datetime from pathlib import Path +import yaml + APP_NAME = "myteam" AGENTS_DIRNAME = ".myteam" DEFAULT_REPO = "beanlab/rosters" +SOURCE_METADATA = ".source.yml" +DEFAULT_REF = "main" def _agents_root(base: Path) -> Path: return base / AGENTS_DIRNAME -def _download_destination(base: Path, 
destination: Path | str | None) -> Path: +def _download_destination(base: Path, roster: str, destination: Path | str | None) -> Path: + agents_root = _agents_root(base) if destination is None: - return _agents_root(base) - return Path(destination) + return agents_root / Path(roster) + return agents_root / Path(destination) + + +def _display_path(base: Path, path: Path) -> str: + try: + return path.relative_to(base).as_posix() + except ValueError: + return path.as_posix() def _repo_urls(repo: str) -> tuple[str, str]: @@ -90,16 +103,11 @@ def _fetch_tree_files(roster_tree, roster_repository_url: str): return file_entries -def _blob_destination(blob_object: dict, destination: Path) -> Path: - file_name = blob_object.get("path", "").split("/")[-1] - return destination / file_name - - -def _download_blob(blob_object: dict, destination: Path, roster_raw_base_url: str): - output_path = _blob_destination(blob_object, destination) - file_name = output_path.name - print(f"\rDownloading {file_name}") - _download_file(f"{roster_raw_base_url}/{blob_object.get('path')}", output_path) +def _require_tree_roster(roster_entry: dict, roster_name: str) -> None: + if roster_entry.get("type") == "tree": + return + print(f"Roster '{roster_name}' is a file. 
Only folder rosters are supported.", file=sys.stderr) + exit(1) def _tree_file_url(roster_dir_name: str, entry: dict, roster_raw_base_url: str) -> str: @@ -125,12 +133,60 @@ def _download_tree_files(file_entries: Iterable[dict], roster_dir_name: str, des print("", file=sys.stderr) -def _download_roster_entry(roster_entry: dict, roster_name: str, destination: Path, roster_repository_url: str, roster_raw_base_url: str): - if roster_entry.get("type") == "blob": - _download_blob(roster_entry, destination, roster_raw_base_url) +def _source_metadata_path(destination: Path) -> Path: + return destination / SOURCE_METADATA + + +def _read_source_metadata(destination: Path) -> dict[str, str] | None: + metadata_path = _source_metadata_path(destination) + if not metadata_path.exists(): + return None + try: + loaded = yaml.safe_load(metadata_path.read_text(encoding="utf-8")) + except yaml.YAMLError: + return None + if not isinstance(loaded, dict): + return None + return {str(key): str(value) for key, value in loaded.items() if value is not None} + + +def _same_source(existing_metadata: dict[str, str] | None, repo: str, roster_name: str) -> bool: + if existing_metadata is None: + return False + return existing_metadata.get("repo") == repo and existing_metadata.get("roster") == roster_name + + +def _ensure_destination_available(base: Path, destination: Path, repo: str, roster_name: str) -> None: + if not destination.exists(): return - tree_files = _fetch_tree_files(roster_entry, roster_repository_url) - _download_tree_files(tree_files, roster_name, destination, roster_raw_base_url) + display_path = _display_path(base, destination) + if _same_source(_read_source_metadata(destination), repo, roster_name): + print( + f"Managed download already exists at {display_path}. 
Run `myteam update {display_path}` instead.", + file=sys.stderr, + ) + exit(1) + print( + f"Unrelated content already exists at {display_path}; delete it or choose a different destination.", + file=sys.stderr, + ) + exit(1) + + +def _source_metadata(base: Path, destination: Path, repo: str, roster_name: str) -> dict[str, str]: + return { + "repo": repo, + "roster": roster_name, + "ref": DEFAULT_REF, + "local_path": _display_path(base, destination), + "downloaded_at": datetime.now(UTC).isoformat(), + } + + +def _write_source_metadata(base: Path, destination: Path, repo: str, roster_name: str) -> None: + destination.mkdir(parents=True, exist_ok=True) + metadata = _source_metadata(base, destination, repo, roster_name) + _source_metadata_path(destination).write_text(yaml.safe_dump(metadata, sort_keys=True), encoding="utf-8") def download_roster( @@ -139,10 +195,14 @@ def download_roster( repo: str = DEFAULT_REPO, ): base = Path.cwd() - destination = _download_destination(base, destination) + destination = _download_destination(base, roster_dir_name, destination) roster_repository_url, roster_raw_base_url = _repo_urls(repo) roster_entry = _fetch_roster_entry(roster_dir_name, roster_repository_url) - _download_roster_entry(roster_entry, roster_dir_name, destination, roster_repository_url, roster_raw_base_url) + _require_tree_roster(roster_entry, roster_dir_name) + _ensure_destination_available(base, destination, repo, roster_dir_name) + tree_files = _fetch_tree_files(roster_entry, roster_repository_url) + _download_tree_files(tree_files, roster_dir_name, destination, roster_raw_base_url) + _write_source_metadata(base, destination, repo, roster_dir_name) def list_available_rosters(repo: str = DEFAULT_REPO): From c82df62fa8b1b47d7e3c9be8f1a733580fc9554d Mon Sep 17 00:00:00 2001 From: gbean Date: Sat, 28 Mar 2026 19:18:45 -0600 Subject: [PATCH 09/14] Add migration notes for managed downloads --- src/myteam/migrations/0.3.0.md | 28 ++++++++++++++++++++++++++++ 1 file 
changed, 28 insertions(+) create mode 100644 src/myteam/migrations/0.3.0.md diff --git a/src/myteam/migrations/0.3.0.md b/src/myteam/migrations/0.3.0.md new file mode 100644 index 0000000..48c106c --- /dev/null +++ b/src/myteam/migrations/0.3.0.md @@ -0,0 +1,28 @@ +## 0.3.0 migration + +Version `0.3.0` changes roster downloads from a flat copy operation into a managed-folder install. + +### What changed + +- `myteam download` now supports folder rosters only. +- Downloaded content is installed into a managed local folder rather than being copied directly into + the target directory root. +- Each downloaded folder now receives a `.source.yml` file at its root so future commands can track + where the content came from. +- Existing `download` targets are treated as owned managed folders or unrelated content, instead of + being merged in place. + +### How to migrate an existing `.myteam` folder + +1. Review any content in `.myteam/` that was previously installed with `myteam download`. +2. If a downloaded roster was copied directly into `.myteam/` without its own folder, move that + content into a dedicated folder before relying on the new download workflow. +3. If you have any single-file roster installs, replace them with folder rosters. `myteam download` + no longer supports single-file targets. +4. Re-download each managed roster folder with the new `myteam download` behavior so it gets a + `.source.yml` file at the folder root. +5. Keep manually authored roles, skills, and other project-owned `.myteam` content in place; this + migration only applies to downloaded roster installs. +6. After the downloaded folders have been reinstalled in managed form, future provenance-aware + commands can recognize their source information.
+ From 90cf2ffbc7095154cd8d6e35686f8d9e19f6c90b Mon Sep 17 00:00:00 2001 From: gbean Date: Sat, 28 Mar 2026 19:19:59 -0600 Subject: [PATCH 10/14] Conclude managed download feature --- README.md | 16 +++++++++++++++- pyproject.toml | 2 +- src/CHANGELOG.md | 12 ++++++++++++ src/myteam/migrations/{0.3.0.md => 0.2.7.md} | 5 ++--- uv.lock | 2 +- 5 files changed, 31 insertions(+), 6 deletions(-) rename src/myteam/migrations/{0.3.0.md => 0.2.7.md} (94%) diff --git a/README.md b/README.md index 7037e3d..fa57a69 100644 --- a/README.md +++ b/README.md @@ -270,7 +270,21 @@ Lists available downloadable rosters from the default roster repository. ### `myteam download ` -Downloads a roster into `.myteam/` by default. +Downloads a folder roster into `.myteam/` by default. + +By default, the roster path is preserved under `.myteam/`, so `myteam download skills/foo` installs +into `.myteam/skills/foo/`. If you provide a destination path, that path becomes the managed install +root under `.myteam/`. + +Each downloaded folder gets a `.source.yml` file at its root so future commands can track where it +came from. + +If the destination already exists, `myteam download` fails instead of merging into it: + +- if the existing folder is the same managed source, run `myteam update ` instead +- if the existing folder is unrelated content, delete it or choose a different destination + +Single-file roster downloads are not supported. Useful when you want to seed an agent system from a reusable template instead of authoring it from scratch. 
diff --git a/pyproject.toml b/pyproject.toml index 7ca8053..5425e57 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -1,6 +1,6 @@ [project] name = "myteam" -version = "0.2.6" +version = "0.2.7" description = "Agent roster CLI" readme = "README.md" requires-python = ">=3.11" diff --git a/src/CHANGELOG.md b/src/CHANGELOG.md index 3621429..453f9b6 100644 --- a/src/CHANGELOG.md +++ b/src/CHANGELOG.md @@ -1,5 +1,17 @@ # Change Log +## 0.2.7 + +- `myteam download` now installs only folder rosters as managed local folders instead of flattening + roster contents directly into the destination root. +- Default downloads preserve the remote roster path under `.myteam/`, and explicit destinations are + treated as managed install roots under `.myteam/`. +- Managed roster installs now write `.source.yml` at the folder root so future commands can track + their origin. +- `myteam download` now fails when the target already exists, directing same-source reinstalls toward + `myteam update ` and rejecting unrelated existing content. +- Removed support for single-file roster downloads. + ## 0.2.6 - `myteam init` now stores the creating `myteam` version in `.myteam/.myteam-version`. diff --git a/src/myteam/migrations/0.3.0.md b/src/myteam/migrations/0.2.7.md similarity index 94% rename from src/myteam/migrations/0.3.0.md rename to src/myteam/migrations/0.2.7.md index 48c106c..d763589 100644 --- a/src/myteam/migrations/0.3.0.md +++ b/src/myteam/migrations/0.2.7.md @@ -1,6 +1,6 @@ -## 0.3.0 migration +## 0.2.7 migration -Version `0.3.0` changes roster downloads from a flat copy operation into a managed-folder install. +Version `0.2.7` changes roster downloads from a flat copy operation into a managed-folder install. ### What changed @@ -25,4 +25,3 @@ Version `0.3.0` changes roster downloads from a flat copy operation into a manag migration only applies to downloaded roster installs. 6. 
After the downloaded folders have been reinstalled in managed form, future provenance-aware commands can recognize their source information. - diff --git a/uv.lock b/uv.lock index e334b13..095c558 100644 --- a/uv.lock +++ b/uv.lock @@ -141,7 +141,7 @@ wheels = [ [[package]] name = "myteam" -version = "0.2.6" +version = "0.2.7" source = { editable = "." } dependencies = [ { name = "fire" }, From 23cb1983a89f40d5b11601b332b5cffd45d5583b Mon Sep 17 00:00:00 2001 From: gbean Date: Sat, 28 Mar 2026 19:20:46 -0600 Subject: [PATCH 11/14] testing skill --- .myteam/testing/skill.md | 1 + 1 file changed, 1 insertion(+) diff --git a/.myteam/testing/skill.md b/.myteam/testing/skill.md index d7ec51f..32359c8 100644 --- a/.myteam/testing/skill.md +++ b/.myteam/testing/skill.md @@ -28,3 +28,4 @@ uv run pytest Tests are found in `tests/`. +The full test suite takes about 30 seconds to run. Plan accordingly. \ No newline at end of file From 71d682f356d249ad3fd35204de1976d553fbf89f Mon Sep 17 00:00:00 2001 From: gbean Date: Sat, 28 Mar 2026 19:24:15 -0600 Subject: [PATCH 12/14] clarify 0.2.7 migration notes --- src/myteam/migrations/0.2.7.md | 12 +----------- 1 file changed, 1 insertion(+), 11 deletions(-) diff --git a/src/myteam/migrations/0.2.7.md b/src/myteam/migrations/0.2.7.md index d763589..6095794 100644 --- a/src/myteam/migrations/0.2.7.md +++ b/src/myteam/migrations/0.2.7.md @@ -14,14 +14,4 @@ Version `0.2.7` changes roster downloads from a flat copy operation into a manag ### How to migrate an existing `.myteam` folder -1. Review any content in `.myteam/` that was previously installed with `myteam download`. -2. If a downloaded roster was copied directly into `.myteam/` without its own folder, move that - content into a dedicated folder before relying on the new download workflow. -3. If you have any single-file roster installs, replace them with folder rosters. `myteam download` - no longer supports single-file targets. -4. 
Re-download each managed roster folder with the new `myteam download` behavior so it gets a - `.source.yml` file at the folder root. -5. Keep manually authored roles, skills, and other project-owned `.myteam` content in place; this - migration only applies to downloaded roster installs. -6. After the downloaded folders have been reinstalled in managed form, future provenance-aware - commands can recognize their source information. +There is nothing that needs to be done to migrate to this version. From d6d7fc29010d63ef150ce8ea8ad3c7f5f9308a28 Mon Sep 17 00:00:00 2001 From: gbean Date: Sat, 28 Mar 2026 19:26:49 -0600 Subject: [PATCH 13/14] Rename update backlog document --- src/governing_docs/backlog/{download.md => update.md} | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) rename src/governing_docs/backlog/{download.md => update.md} (98%) diff --git a/src/governing_docs/backlog/download.md b/src/governing_docs/backlog/update.md similarity index 98% rename from src/governing_docs/backlog/download.md rename to src/governing_docs/backlog/update.md index 73f9fdf..a08ce20 100644 --- a/src/governing_docs/backlog/download.md +++ b/src/governing_docs/backlog/update.md @@ -1,4 +1,4 @@ -# Download Provenance and Update Design +# Update Design ## Summary From e52faa7e93e8a5bf39662bdf9d7ab738e19afc44 Mon Sep 17 00:00:00 2001 From: gbean Date: Sat, 28 Mar 2026 19:28:39 -0600 Subject: [PATCH 14/14] Add backlog grooming skill backlog item --- .../backlog/backlog-grooming-skill.md | 135 ++++++++++++++++++ 1 file changed, 135 insertions(+) create mode 100644 src/governing_docs/backlog/backlog-grooming-skill.md diff --git a/src/governing_docs/backlog/backlog-grooming-skill.md b/src/governing_docs/backlog/backlog-grooming-skill.md new file mode 100644 index 0000000..b3a005a --- /dev/null +++ b/src/governing_docs/backlog/backlog-grooming-skill.md @@ -0,0 +1,135 @@ +# Backlog Grooming Skill + +## Summary + +Add a dedicated backlog-and-grooming skill that teaches agents how to 
write backlog documents, how to +groom the backlog consistently, and how to maintain a project-level view of backlog dependencies and +priorities. + +This is not just a one-time documentation task. The intent is to create a repeatable operating +process so backlog work stays readable, comparable, and actionable as the backlog grows. + +## Problems + +### Backlog items do not yet have an explicit authoring standard + +The existing backlog documents are generally coherent, but the repo does not yet define: + +- when a backlog item should exist instead of a feature plan +- what sections a backlog document should include +- how much implementation detail is appropriate at backlog stage +- how backlog items should refer to related work or dependencies + +That makes the quality of backlog docs depend too much on local judgment. + +### There is no explicit grooming process + +Backlog items currently accumulate as design notes, but there is no maintained process for: + +- reviewing older items for staleness +- splitting oversized items +- merging duplicates +- identifying prerequisite relationships +- identifying which items are most urgent or strategically important + +Without grooming, the backlog becomes harder to use as a planning tool. + +### Dependencies and priorities are not tracked in one place + +Some dependencies are mentioned inside individual backlog docs, but there is no single maintained +document that answers questions such as: + +- what items block other items +- what work can proceed independently +- which backlog items are highest priority right now +- which items are design follow-ups versus implementation-ready work + +That forces readers to reconstruct planning state from scattered notes. + +## Goals + +- Provide a built-in or project skill that teaches agents how to create and maintain backlog docs. +- Standardize the expected format and scope of backlog documents. +- Define a repeatable grooming workflow for reviewing and updating backlog items. 
+- Maintain a dedicated document that tracks backlog dependencies, sequencing, and priorities. +- Keep backlog documentation lightweight enough to maintain, while still useful for planning. + +## Proposed Change + +### Add a backlog-and-grooming skill + +Create a skill that agents load when they are: + +- writing a new backlog item +- updating an existing backlog item +- grooming the backlog +- planning cross-cutting design work + +The skill should explain: + +- when to create a backlog doc versus a feature plan +- the standard structure for backlog docs +- how to describe scope boundaries and open questions +- how to record follow-up work without turning backlog docs into implementation transcripts +- how to update the backlog dependency/priority tracker + +### Standardize backlog doc format + +The skill should define a preferred format for backlog documents. A reasonable baseline is: + +- `Summary` +- `Problems` or `Problem` +- `Goals` +- `Proposed Change` or `Proposed Direction` +- `Scope Boundaries` +- `Open Questions` or `Open Follow-Up Work` + +Not every document must use identical section names, but the skill should define the expected shape +and purpose of each section so backlog items remain comparable. + +### Define a backlog grooming process + +The skill should define a recurring grooming workflow such as: + +1. Review recently added backlog docs. +2. Check older docs for staleness or changed assumptions. +3. Merge duplicates or split overloaded items. +4. Identify prerequisite relationships and sequencing constraints. +5. Assign or revise priority labels or ordering. +6. Update the dependency/priority tracker to reflect the current view. + +The process should also explain what kinds of edits belong in a grooming pass versus requiring +separate feature design work. + +### Maintain a dependency and priority tracker + +Add a dedicated governing doc that summarizes backlog relationships in one place. 
+ +That tracker should make it easy to answer: + +- which items are top priority +- which items are blocked by other items +- which items are prerequisites for broad roadmap themes +- which items are still exploratory versus ready for feature planning + +The tracker can stay lightweight, but it should be authoritative enough that agents do not need to +infer roadmap state from scattered backlog prose. + +## Scope Boundaries + +- This item is about backlog process and documentation quality, not about implementing any specific + product feature. +- This does not require a heavy project-management system or external tooling. +- This does not require every existing backlog doc to be rewritten immediately, though some cleanup may + be needed as part of adoption. + +## Open Questions + +- Should the backlog-and-grooming skill be packaged as a built-in skill, a project-local skill, or + both? +- Should the dependency/priority tracker use simple prose sections, a table, or a structured YAML-like + format? +- Should priorities be expressed as ordered lists, named buckets, or explicit status markers such as + `exploratory`, `ready`, and `blocked`? +- How often should backlog grooming be expected: opportunistically during related work, or as a + dedicated recurring maintenance task?
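+
+One possible lightweight shape for the tracker, sketched here only to make the open question concrete: the file path, field names, item ids, statuses, and blocking relationships below are all illustrative assumptions, not an existing or decided format.
+
+```yaml
+# Hypothetical: src/governing_docs/backlog/tracker.yml
+items:
+  - id: third-party-skill-trees
+    status: exploratory        # exploratory | ready | blocked
+    priority: 1
+    blocked_by: []
+  - id: agent-environment-management
+    status: blocked
+    priority: 2
+    blocked_by: [third-party-skill-trees]
+```
+
+Whether prose, a table, or a structured file like this wins should fall out of the grooming-process design above.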