diff --git a/README.md b/README.md index 2aaf7a7..1119cd6 100644 --- a/README.md +++ b/README.md @@ -50,37 +50,16 @@ specleft features add-scenario \ specleft status ``` -### Path 2: Bulk-generate feature specs from a PRD +### Path 2: Discover specs from existing code -Create `prd.md` describing intended behavior. - -**Recommended**: Update `.specleft/templates/prd-template.yml` to customize how your PRD sections map to features/scenarios. - -Then run: +For brownfield projects, discover and stage draft specs from existing tests/routes/docstrings: ```bash - -# Generate specs from the PRD without writing files (remove --dry-run to write) -specleft plan --dry-run - -# Validate the generated specs -specleft features validate - -# Preview skeleton generation (remove --dry-run to generate) -specleft test skeleton --dry-run - -# Confirm and generate skeleton tests -specleft test skeleton - -# Show traceability / coverage status +specleft discover +specleft discover promote --all specleft status - -# Run your tests with pytest as normal -pytest ``` -That flow converts `prd.md` into `.specleft/specs/*.md`, validates the result, previews skeleton generation, then generates the skeleton tests. - ## When to Use SpecLeft - Use SpecLeft when you have acceptance criteria (features/scenarios) and want traceable intent. @@ -110,6 +89,7 @@ Otherwise begin with: specleft doctor --format json specleft contract --format json specleft features stats --format json +specleft discover --format json ``` SpecLeft includes a verifiable skill file at `.specleft/SKILL.md`. Verify integrity with: diff --git a/docs/SKILL.md b/docs/SKILL.md index fd033bf..5f6fa7d 100644 --- a/docs/SKILL.md +++ b/docs/SKILL.md @@ -81,6 +81,12 @@ Builds an HTML report from `.specleft/results/`. `--analyze` inspects PRD structure without writing files. `--template` uses a YAML section-matching template. 
+## Discovery + +`specleft discover --format json [PROJECT_ROOT] [--dry-run] [--language python|typescript] [--output-dir PATH] [--specs-dir PATH]` + +`specleft discover promote --format json [--all] [FEATURE_ID...] [--specs-dir PATH] [--overwrite] [--dry-run]` + ## Contract ### Show contract diff --git a/docs/cli-reference.md b/docs/cli-reference.md index 63eccd1..f89f191 100644 --- a/docs/cli-reference.md +++ b/docs/cli-reference.md @@ -173,6 +173,83 @@ Options: --format [table|json] Output format (default: table) ``` +## Discover + +### `specleft discover` + +Run the full discovery pipeline and stage generated draft specs. + +```bash +specleft discover [OPTIONS] [PROJECT_ROOT] + +Options: + --format [table|json] Output format (default: auto-detect TTY) + --dry-run Preview without writing files + --output-dir PATH Override staging dir (default: .specleft/specs/_discovered/) + --language [python|typescript] + Limit languages (repeatable) + --specs-dir PATH Existing specs dir for traceability matching + --pretty Pretty-print JSON output +``` + +Table output example: + +```text +Scanning project... + ✓ Python (pytest) — 142 test functions + ✓ API routes — 24 routes + ✓ Docstrings — 67 items + ✓ Git history — 200 commits + +Generating draft specs... 
+ + Feature Scenarios Written to + ─────────────────────── ───────── ────────────────────────────────────── + user-authentication 8 .specleft/specs/_discovered/user-authentication.md + payment-processing 5 .specleft/specs/_discovered/payment-processing.md + + 14 features, 47 scenarios written to .specleft/specs/_discovered + Review drafts, then promote with: specleft discover promote +``` + +JSON output schema: + +```json +{ + "features": [ + { + "feature_id": "user-authentication", + "name": "User Authentication", + "scenario_count": 8, + "output_file": ".specleft/specs/_discovered/user-authentication.md", + "confidence": 0.8 + } + ], + "total_features": 14, + "total_scenarios": 47, + "output_dir": ".specleft/specs/_discovered", + "dry_run": false, + "written": [".specleft/specs/_discovered/user-authentication.md"], + "errors": [] +} +``` + +### `specleft discover promote` + +Promote draft specs into the active specs directory. Draft files remain in staging. + +```bash +specleft discover promote [OPTIONS] [FEATURE_ID...] + +Options: + --all Promote every staged draft file + --specs-dir PATH Destination specs dir (default: resolve_specs_dir()) + --overwrite Replace existing files at destination + --dry-run Preview without writing files + --format [table|json] + --pretty +``` + ## Status ### `specleft status` diff --git a/docs/getting-started.md b/docs/getting-started.md index 22df1bc..0df39ad 100644 --- a/docs/getting-started.md +++ b/docs/getting-started.md @@ -14,6 +14,15 @@ mkdir -p .specleft/specs/calculator/addition Create `.specleft/specs/calculator/_feature.md`, `.specleft/specs/calculator/addition/_story.md`, and a scenario file like `.specleft/specs/calculator/addition/basic_addition.md`. +## Discover Specs From Existing Code + +```bash +specleft discover +specleft discover promote --all +``` + +`specleft discover` stages generated drafts in `.specleft/specs/_discovered/` so they can be reviewed before promotion. 
+ ## Generate Tests ```bash diff --git a/features/feature-spec-discovery.md b/features/feature-spec-discovery.md index 8877a75..35d656e 100644 --- a/features/feature-spec-discovery.md +++ b/features/feature-spec-discovery.md @@ -222,6 +222,29 @@ Add shared discovery infrastructure for Issues #125 and #126: centralized parser **When** `generate_draft_specs(..., traceability_links=links)` writes markdown **Then** the scenario block includes a `linked_tests` frontmatter section with file, function, and confidence values. +### Story 16: Discover command and draft promotion flow +**Scenario:** As a user onboarding from an existing codebase, I need one command to run discovery and stage draft specs. +**Given** a repository with supported source files and tests +**When** I run `specleft discover` +**Then** discovery runs end-to-end and writes draft specs to `.specleft/specs/_discovered/`. + +**Scenario:** As an automation client, I need structured machine-readable output. +**Given** discovery succeeds or miners report partial failures +**When** I run `specleft discover --format json` +**Then** the command exits successfully and outputs valid JSON with features, totals, output dir, dry-run state, and `errors`. + +**Scenario:** As a user reviewing generated drafts, I need a safe promotion step. +**Given** staged files exist in `.specleft/specs/_discovered/` +**When** I run `specleft discover promote --all` +**Then** all draft files are copied to the active specs directory. +**And** the files remain in `_discovered/` after promotion. + +**Scenario:** As a user promoting incrementally, I need targeted and non-destructive behavior. +**Given** staged draft files and existing active specs +**When** I run `specleft discover promote user-authentication` +**Then** only that feature file is copied. +**And** existing destination files are skipped unless `--overwrite` is passed. 
+ ## Acceptance Criteria - Language abstraction returns `SupportedLanguage` members for `.py`, `.ts`, `.tsx`, `.js`, `.jsx`, `.mjs` and `None` otherwise. - `LanguageRegistry().parse(path_to_py_file)` returns `(node, SupportedLanguage.PYTHON)` for valid Python input. @@ -299,3 +322,8 @@ Add shared discovery infrastructure for Issues #125 and #126: centralized parser - `specleft status` marks convention-linked scenarios as implemented with `match_kind="convention"` in verbose JSON output. - `specleft status --format table` displays `✓ (convention)` for convention-linked scenarios. - `generate_draft_specs(..., traceability_links=...)` emits `linked_tests` frontmatter for matched scenarios. +- `specleft discover --dry-run` does not write files and reports planned outputs. +- `specleft discover --format json` exits `0` with valid JSON output even when miners report errors. +- `specleft discover` writes drafts to `.specleft/specs/_discovered/` by default and supports `--output-dir` override. +- `specleft discover promote --all` copies staged drafts to active specs while keeping staged drafts intact. +- `specleft discover promote FEATURE_ID` copies only the requested draft and skips existing files unless `--overwrite`. 
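The promote semantics above (copy rather than move, skip an existing destination unless `--overwrite`) can be sketched in a few lines. The `promote` helper and paths here are illustrative, not SpecLeft's implementation:

```python
import shutil
import tempfile
from pathlib import Path


def promote(source: Path, dest_dir: Path, overwrite: bool = False) -> str:
    """Copy one staged draft into the active specs dir (illustrative helper)."""
    dest_dir.mkdir(parents=True, exist_ok=True)
    destination = dest_dir / source.name
    if destination.exists() and not overwrite:
        return "skipped"
    shutil.copy2(source, destination)  # copy, not move: the staged draft is kept
    return "promoted"


with tempfile.TemporaryDirectory() as tmp:
    staged = Path(tmp) / ".specleft" / "specs" / "_discovered"
    active = Path(tmp) / ".specleft" / "specs"
    staged.mkdir(parents=True)
    draft = staged / "user-authentication.md"
    draft.write_text("# Feature: User Auth\n")

    results = [
        promote(draft, active),                  # first promotion succeeds
        promote(draft, active),                  # destination exists -> skipped
        promote(draft, active, overwrite=True),  # --overwrite replaces it
    ]
    draft_kept = draft.exists()                  # staged copy remains

print(results, draft_kept)  # → ['promoted', 'skipped', 'promoted'] True
```

Keeping the staged copy intact makes promotion idempotent and lets a review loop re-run `specleft discover promote` safely.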
diff --git a/src/specleft/cli/main.py b/src/specleft/cli/main.py index 6496c1a..1fd8c06 100644 --- a/src/specleft/cli/main.py +++ b/src/specleft/cli/main.py @@ -10,6 +10,7 @@ from specleft.commands import ( contract, coverage, + discover, doctor, features, guide, @@ -48,6 +49,7 @@ def cli() -> None: cli.add_command(coverage) cli.add_command(init) cli.add_command(contract) +cli.add_command(discover) cli.add_command(skill_group) cli.add_command(guide) cli.add_command(mcp) diff --git a/src/specleft/commands/__init__.py b/src/specleft/commands/__init__.py index 1e29fb9..578fa22 100644 --- a/src/specleft/commands/__init__.py +++ b/src/specleft/commands/__init__.py @@ -7,6 +7,7 @@ from specleft.commands.contract import contract from specleft.commands.coverage import coverage +from specleft.commands.discover import discover from specleft.commands.doctor import doctor from specleft.commands.features import features from specleft.commands.guide import guide @@ -21,6 +22,7 @@ __all__ = [ "contract", "coverage", + "discover", "doctor", "features", "guide", diff --git a/src/specleft/commands/discover.py b/src/specleft/commands/discover.py new file mode 100644 index 0000000..19bb011 --- /dev/null +++ b/src/specleft/commands/discover.py @@ -0,0 +1,498 @@ +# SPDX-License-Identifier: Apache-2.0 +# Copyright (c) 2026 SpecLeft Contributors + +"""Discovery command group.""" + +from __future__ import annotations + +import shutil +import sys +from dataclasses import dataclass +from pathlib import Path +from typing import Any + +import click + +from specleft.commands.output import json_dumps, resolve_output_format +from specleft.discovery.grouping import group_items +from specleft.discovery.models import ( + DEFAULT_DISCOVERY_OUTPUT_DIR, + DiscoveryReport, + DraftFeature, + ItemKind, + SupportedLanguage, +) +from specleft.discovery.pipeline import build_default_pipeline +from specleft.discovery.spec_writer import generate_draft_specs +from specleft.discovery.traceability import 
infer_traceability +from specleft.schema import SpecsConfig +from specleft.utils.specs_dir import ( + DEFAULT_SPECS_DIR, + FALLBACK_SPECS_DIR, +) + + +@dataclass(frozen=True) +class _DiscoverSummary: + features: list[DraftFeature] + written_paths: list[Path] + errors: list[str] + output_dir: Path + report: DiscoveryReport + + +@click.group( + "discover", + invoke_without_command=True, + context_settings={"allow_extra_args": True}, +) +@click.option( + "--format", + "format_type", + type=click.Choice(["table", "json"], case_sensitive=False), + default=None, + help="Output format. Defaults to table in a terminal and json otherwise.", +) +@click.option("--dry-run", is_flag=True, help="Preview without writing any files.") +@click.option( + "--output-dir", + type=click.Path(file_okay=False, dir_okay=True, path_type=Path), + default=None, + help="Override output dir (default: .specleft/specs/_discovered/).", +) +@click.option( + "--language", + "languages", + type=click.Choice(["python", "typescript"], case_sensitive=False), + multiple=True, + help="Limit discovery to a language (repeatable).", +) +@click.option( + "--specs-dir", + type=click.Path(file_okay=False, dir_okay=True, path_type=Path), + default=None, + help="Existing specs dir for traceability matching.", +) +@click.option("--pretty", is_flag=True, help="Pretty-print JSON output.") +@click.pass_context +def discover( + ctx: click.Context, + format_type: str | None, + dry_run: bool, + output_dir: Path | None, + languages: tuple[str, ...], + specs_dir: Path | None, + pretty: bool, +) -> None: + """Run discovery and generate draft specs.""" + if ctx.invoked_subcommand is not None: + return + + project_root = _parse_project_root_arg(ctx.args) + root = _resolve_project_root(project_root) + selected_format = resolve_output_format(format_type) + language_filter = _resolve_language_filter(languages) + resolved_output_dir = _resolve_output_dir(root, output_dir) + resolved_specs_dir = _resolve_specs_dir(root, specs_dir) 
+ + summary = _run_discovery( + root=root, + output_dir=resolved_output_dir, + specs_dir=resolved_specs_dir, + dry_run=dry_run, + language_filter=language_filter, + ) + + if selected_format == "json": + payload = _build_discover_json(summary, dry_run=dry_run, root=root) + click.echo(json_dumps(payload, pretty=pretty)) + return + + _print_discover_table(summary, dry_run=dry_run, root=root) + + +@discover.command("promote") +@click.argument("feature_ids", nargs=-1) +@click.option("--all", "promote_all", is_flag=True, help="Promote all draft files.") +@click.option( + "--specs-dir", + type=click.Path(file_okay=False, dir_okay=True, path_type=Path), + default=None, + help="Destination specs dir (default: resolved via resolve_specs_dir()).", +) +@click.option("--overwrite", is_flag=True, help="Replace existing destination files.") +@click.option("--dry-run", is_flag=True, help="Preview without writing files.") +@click.option( + "--format", + "format_type", + type=click.Choice(["table", "json"], case_sensitive=False), + default=None, + help="Output format. 
Defaults to table in a terminal and json otherwise.", +) +@click.option("--pretty", is_flag=True, help="Pretty-print JSON output.") +def discover_promote( + feature_ids: tuple[str, ...], + promote_all: bool, + specs_dir: Path | None, + overwrite: bool, + dry_run: bool, + format_type: str | None, + pretty: bool, +) -> None: + """Promote draft specs into the active specs directory.""" + if not promote_all and not feature_ids: + click.secho("Use --all or provide one or more FEATURE_ID values.", fg="red") + sys.exit(1) + + root = Path.cwd().resolve() + source_dir = _resolve_output_dir(root, None) + destination_dir = _resolve_specs_dir(root, specs_dir) + selected_format = resolve_output_format(format_type) + + payload = _promote_specs( + source_dir=source_dir, + destination_dir=destination_dir, + feature_ids=feature_ids, + promote_all=promote_all, + overwrite=overwrite, + dry_run=dry_run, + root=root, + ) + + if selected_format == "json": + click.echo(json_dumps(payload, pretty=pretty)) + return + _print_promote_table(payload) + + +def _resolve_project_root(project_root: Path | None) -> Path: + candidate = project_root if project_root is not None else Path(".") + return candidate.resolve() + + +def _parse_project_root_arg(args: list[str]) -> Path | None: + if not args: + return None + if len(args) > 1: + raise click.UsageError("Too many arguments. 
Expected at most one PROJECT_ROOT.") + return Path(args[0]) + + +def _resolve_output_dir(root: Path, output_dir: Path | None) -> Path: + candidate = output_dir if output_dir is not None else DEFAULT_DISCOVERY_OUTPUT_DIR + if candidate.is_absolute(): + return candidate + return (root / candidate).resolve() + + +def _resolve_specs_dir(root: Path, specs_dir: Path | None) -> Path: + if specs_dir is not None: + if specs_dir.is_absolute(): + return specs_dir + return (root / specs_dir).resolve() + + preferred = root / DEFAULT_SPECS_DIR + if preferred.exists(): + return preferred + + fallback = root / FALLBACK_SPECS_DIR + if fallback.exists(): + return fallback + return preferred + + +def _resolve_language_filter(raw: tuple[str, ...]) -> list[SupportedLanguage] | None: + if not raw: + return None + + resolved: list[SupportedLanguage] = [] + for language in raw: + normalized = language.strip().lower() + if normalized == "python": + resolved.append(SupportedLanguage.PYTHON) + continue + + resolved.append(SupportedLanguage.TYPESCRIPT) + resolved.append(SupportedLanguage.JAVASCRIPT) + + deduped: list[SupportedLanguage] = [] + seen: set[SupportedLanguage] = set() + for language in resolved: + if language in seen: + continue + seen.add(language) + deduped.append(language) + return deduped + + +def _run_discovery( + *, + root: Path, + output_dir: Path, + specs_dir: Path, + dry_run: bool, + language_filter: list[SupportedLanguage] | None, +) -> _DiscoverSummary: + pipeline = build_default_pipeline(root, languages=language_filter) + report = pipeline.run() + + draft_features = sorted( + group_items(report.all_items), + key=lambda feature: (feature.feature_id, feature.name), + ) + + specs_config, spec_errors = _load_specs_config(specs_dir) + traceability_links = infer_traceability(report.all_items, specs_config) + written_paths = generate_draft_specs( + draft_features, + output_dir, + dry_run=dry_run, + traceability_links=traceability_links, + ) + + errors = list(report.errors) + 
errors.extend(spec_errors) + return _DiscoverSummary( + features=draft_features, + written_paths=written_paths, + errors=errors, + output_dir=output_dir, + report=report, + ) + + +def _load_specs_config(specs_dir: Path) -> tuple[SpecsConfig, list[str]]: + if not specs_dir.exists(): + return SpecsConfig(features=[]), [] + + try: + return SpecsConfig.from_directory(specs_dir), [] + except Exception as exc: + return SpecsConfig(features=[]), [f"Unable to parse specs: {exc}"] + + +def _build_discover_json( + summary: _DiscoverSummary, + *, + dry_run: bool, + root: Path, +) -> dict[str, Any]: + feature_rows = _feature_rows(summary.features, summary.output_dir, root) + return { + "features": feature_rows, + "total_features": len(summary.features), + "total_scenarios": sum(len(feature.scenarios) for feature in summary.features), + "output_dir": _display_path(summary.output_dir, root), + "dry_run": dry_run, + "written": [_display_path(path, root) for path in summary.written_paths], + "errors": summary.errors, + } + + +def _feature_rows( + features: list[DraftFeature], output_dir: Path, root: Path +) -> list[dict[str, Any]]: + rows: list[dict[str, Any]] = [] + for feature in features: + path = output_dir / f"{feature.feature_id}.md" + rows.append( + { + "feature_id": feature.feature_id, + "name": feature.name, + "scenario_count": len(feature.scenarios), + "output_file": _display_path(path, root), + "confidence": round(feature.confidence, 2), + } + ) + return rows + + +def _print_discover_table( + summary: _DiscoverSummary, *, dry_run: bool, root: Path +) -> None: + report = summary.report + item_counts = _item_kind_counts(report) + framework_summary = _framework_summary(report) + language_test_counts = _language_test_counts(report) + + click.echo("Scanning project...") + if report.languages_detected: + for language in report.languages_detected: + framework = framework_summary.get(language.value, "unknown") + test_count = language_test_counts.get(language, 0) + click.echo( + 
f" ✓ {language.value.title()} ({framework}) — {test_count} test functions" + ) + else: + click.echo(" ✓ No supported languages detected") + + click.echo(f" ✓ API routes — {item_counts['api_route']} routes") + click.echo(f" ✓ Docstrings — {item_counts['docstring']} items") + click.echo(f" ✓ Git history — {item_counts['git_commit']} commits") + click.echo("") + click.echo("Generating draft specs...") + click.echo("") + + rows = _feature_rows(summary.features, summary.output_dir, root) + if rows: + click.echo(" Feature Scenarios Written to") + click.echo( + " ─────────────────────── ───────── ─────────────────────────────" + ) + for row in rows: + click.echo( + f" {row['feature_id']:<23} {row['scenario_count']:<9} {row['output_file']}" + ) + click.echo("") + else: + click.echo(" No draft features discovered.") + click.echo("") + + scenario_total = sum(len(feature.scenarios) for feature in summary.features) + action = "would be written to" if dry_run else "written to" + click.echo( + f" {len(summary.features)} features, {scenario_total} scenarios {action} " + f"{_display_path(summary.output_dir, root)}" + ) + click.echo(" Review drafts, then promote with: specleft discover promote") + if summary.errors: + click.echo("") + click.echo("Errors:") + for error in summary.errors: + click.echo(f" - {error}") + + +def _item_kind_counts(report: DiscoveryReport) -> dict[str, int]: + return { + "test_function": len(report.items_by_kind.get(ItemKind.TEST_FUNCTION, [])), + "api_route": len(report.items_by_kind.get(ItemKind.API_ROUTE, [])), + "docstring": len(report.items_by_kind.get(ItemKind.DOCSTRING, [])), + "git_commit": len(report.items_by_kind.get(ItemKind.GIT_COMMIT, [])), + } + + +def _framework_summary(report: DiscoveryReport) -> dict[str, str]: + frameworks: dict[str, set[str]] = {} + for item in report.items_by_kind.get(ItemKind.TEST_FUNCTION, []): + if item.language is None: + continue + framework = str(item.metadata.get("framework", "unknown")).lower() + 
frameworks.setdefault(item.language.value, set()).add(framework) + + summarized: dict[str, str] = {} + for language, values in frameworks.items(): + usable = sorted(value for value in values if value != "unknown") + summarized[language] = ", ".join(usable) if usable else "unknown" + return summarized + + +def _language_test_counts(report: DiscoveryReport) -> dict[SupportedLanguage, int]: + counts: dict[SupportedLanguage, int] = {} + for item in report.items_by_kind.get(ItemKind.TEST_FUNCTION, []): + if item.language is None: + continue + counts[item.language] = counts.get(item.language, 0) + 1 + return counts + + +def _promote_specs( + *, + source_dir: Path, + destination_dir: Path, + feature_ids: tuple[str, ...], + promote_all: bool, + overwrite: bool, + dry_run: bool, + root: Path, +) -> dict[str, Any]: + sources = _source_files(source_dir, feature_ids, promote_all) + promoted: list[str] = [] + skipped: list[str] = [] + missing: list[str] = [] + + if not dry_run: + destination_dir.mkdir(parents=True, exist_ok=True) + + for source in sources: + if not source.exists(): + missing.append(_display_path(source, root)) + continue + + destination = destination_dir / source.name + destination_display = _display_path(destination, root) + if destination.exists() and not overwrite: + skipped.append(destination_display) + continue + + if not dry_run: + shutil.copy2(source, destination) + promoted.append(destination_display) + + errors: list[str] = [] + for missing_path in missing: + errors.append(f"Draft file not found: {missing_path}") + + return { + "source_dir": _display_path(source_dir, root), + "specs_dir": _display_path(destination_dir, root), + "dry_run": dry_run, + "overwrite": overwrite, + "promoted": promoted, + "skipped": skipped, + "missing": missing, + "errors": errors, + } + + +def _source_files( + source_dir: Path, + feature_ids: tuple[str, ...], + promote_all: bool, +) -> list[Path]: + if promote_all: + return sorted(source_dir.glob("*.md")) + + deduped = 
sorted( + {feature_id.strip() for feature_id in feature_ids if feature_id.strip()} + ) + return [source_dir / f"{feature_id}.md" for feature_id in deduped] + + +def _print_promote_table(payload: dict[str, Any]) -> None: + click.echo(f"Promoting draft specs from {payload['source_dir']}") + click.echo(f"Destination: {payload['specs_dir']}") + click.echo("") + + promoted: list[str] = payload["promoted"] + skipped: list[str] = payload["skipped"] + missing: list[str] = payload["missing"] + + if promoted: + click.echo("Promoted:") + for path in promoted: + click.echo(f" ✓ {path}") + else: + click.echo("Promoted: none") + + if skipped: + click.echo("Skipped (already exists):") + for path in skipped: + click.echo(f" - {path}") + + if missing: + click.echo("Missing:") + for path in missing: + click.echo(f" - {path}") + + click.echo("") + click.echo( + f"Summary: {len(promoted)} promoted, {len(skipped)} skipped, " + f"{len(missing)} missing" + ) + + +def _display_path(path: Path, root: Path) -> str: + try: + return path.resolve().relative_to(root).as_posix() + except ValueError: + return path.resolve().as_posix() diff --git a/tests/cli/test_discover.py b/tests/cli/test_discover.py new file mode 100644 index 0000000..aba85df --- /dev/null +++ b/tests/cli/test_discover.py @@ -0,0 +1,300 @@ +"""Tests for discover command.""" + +from __future__ import annotations + +import json +import uuid +from importlib import import_module +from pathlib import Path + +import pytest +from click.testing import CliRunner + +from specleft.cli.main import cli +from specleft.discovery.models import ( + DiscoveryReport, + DraftFeature, + DraftScenario, + MinerResult, + SupportedLanguage, +) +from specleft.schema import SpecStep, StepType + +discover_module = import_module("specleft.commands.discover") + + +class _FakePipeline: + def __init__(self, report: DiscoveryReport) -> None: + self._report = report + + def run(self) -> DiscoveryReport: + return self._report + + +def _report( + root: Path, + 
*, + errors: list[str] | None = None, + languages: list[SupportedLanguage] | None = None, +) -> DiscoveryReport: + return DiscoveryReport( + project_root=root, + languages_detected=( + languages if languages is not None else [SupportedLanguage.PYTHON] + ), + miner_results=[ + MinerResult( + miner_id=uuid.UUID("11111111-1111-1111-1111-111111111111"), + miner_name="stub", + items=[], + error=errors[0] if errors else None, + duration_ms=1, + ) + ], + total_items=0, + errors=errors or [], + duration_ms=4, + ) + + +def _draft_feature(feature_id: str, scenario_count: int = 1) -> DraftFeature: + steps = [ + SpecStep(type=StepType.GIVEN, description="a precondition"), + SpecStep(type=StepType.WHEN, description="an action happens"), + SpecStep(type=StepType.THEN, description="an outcome is observed"), + ] + scenarios = [ + DraftScenario( + title=f"{feature_id} scenario {index}", + steps=steps, + source_items=[], + ) + for index in range(1, scenario_count + 1) + ] + return DraftFeature( + feature_id=feature_id, + name=feature_id.replace("-", " ").title(), + scenarios=scenarios, + source_items=[], + confidence=0.8, + ) + + +class TestDiscoverCommand: + def test_discover_dry_run_writes_no_files( + self, monkeypatch: pytest.MonkeyPatch, tmp_path: Path + ) -> None: + runner = CliRunner() + monkeypatch.setattr( + discover_module, + "build_default_pipeline", + lambda root, languages=None: _FakePipeline(_report(root)), + ) + monkeypatch.setattr( + discover_module, + "group_items", + lambda items: [_draft_feature("user-authentication", 2)], + ) + + with runner.isolated_filesystem(temp_dir=tmp_path): + result = runner.invoke(cli, ["discover", "--dry-run", "--format", "json"]) + assert result.exit_code == 0 + + payload = json.loads(result.output) + assert payload["dry_run"] is True + assert payload["total_features"] == 1 + assert payload["total_scenarios"] == 2 + assert Path(".specleft/specs/_discovered").exists() is False + + def test_discover_json_output_schema( + self, monkeypatch: 
pytest.MonkeyPatch, tmp_path: Path + ) -> None: + runner = CliRunner() + monkeypatch.setattr( + discover_module, + "build_default_pipeline", + lambda root, languages=None: _FakePipeline(_report(root)), + ) + monkeypatch.setattr( + discover_module, + "group_items", + lambda items: [_draft_feature("payment-processing", 3)], + ) + + with runner.isolated_filesystem(temp_dir=tmp_path): + result = runner.invoke(cli, ["discover", "--format", "json"]) + assert result.exit_code == 0 + payload = json.loads(result.output) + + assert payload["total_features"] == 1 + assert payload["total_scenarios"] == 3 + assert payload["output_dir"] == ".specleft/specs/_discovered" + assert isinstance(payload["features"], list) + assert payload["features"][0]["feature_id"] == "payment-processing" + assert payload["features"][0]["output_file"].endswith( + "payment-processing.md" + ) + assert payload["errors"] == [] + + def test_discover_writes_to_default_output_dir( + self, monkeypatch: pytest.MonkeyPatch, tmp_path: Path + ) -> None: + runner = CliRunner() + monkeypatch.setattr( + discover_module, + "build_default_pipeline", + lambda root, languages=None: _FakePipeline(_report(root)), + ) + monkeypatch.setattr( + discover_module, + "group_items", + lambda items: [_draft_feature("user-authentication", 1)], + ) + + with runner.isolated_filesystem(temp_dir=tmp_path): + result = runner.invoke(cli, ["discover", "--format", "json"]) + assert result.exit_code == 0 + assert Path(".specleft/specs/_discovered/user-authentication.md").exists() + + def test_discover_respects_output_dir_override( + self, monkeypatch: pytest.MonkeyPatch, tmp_path: Path + ) -> None: + runner = CliRunner() + monkeypatch.setattr( + discover_module, + "build_default_pipeline", + lambda root, languages=None: _FakePipeline(_report(root)), + ) + monkeypatch.setattr( + discover_module, + "group_items", + lambda items: [_draft_feature("user-authentication", 1)], + ) + + with runner.isolated_filesystem(temp_dir=tmp_path): + result = 
runner.invoke( + cli, + ["discover", "--format", "json", "--output-dir", "tmp/out"], + ) + assert result.exit_code == 0 + assert Path("tmp/out/user-authentication.md").exists() + + def test_discover_promote_all(self, tmp_path: Path) -> None: + runner = CliRunner() + with runner.isolated_filesystem(temp_dir=tmp_path): + source_dir = Path(".specleft/specs/_discovered") + destination_dir = Path(".specleft/specs") + source_dir.mkdir(parents=True, exist_ok=True) + destination_dir.mkdir(parents=True, exist_ok=True) + (source_dir / "user-authentication.md").write_text("# Feature: User Auth\n") + (source_dir / "payments.md").write_text("# Feature: Payments\n") + + result = runner.invoke( + cli, + ["discover", "promote", "--all", "--format", "json"], + ) + assert result.exit_code == 0 + payload = json.loads(result.output) + assert len(payload["promoted"]) == 2 + assert (destination_dir / "user-authentication.md").exists() + assert (destination_dir / "payments.md").exists() + + def test_discover_promote_selected_feature(self, tmp_path: Path) -> None: + runner = CliRunner() + with runner.isolated_filesystem(temp_dir=tmp_path): + source_dir = Path(".specleft/specs/_discovered") + destination_dir = Path(".specleft/specs") + source_dir.mkdir(parents=True, exist_ok=True) + destination_dir.mkdir(parents=True, exist_ok=True) + (source_dir / "user-authentication.md").write_text("# Feature: User Auth\n") + (source_dir / "payments.md").write_text("# Feature: Payments\n") + + result = runner.invoke( + cli, + ["discover", "promote", "user-authentication", "--format", "json"], + ) + assert result.exit_code == 0 + payload = json.loads(result.output) + assert len(payload["promoted"]) == 1 + assert (destination_dir / "user-authentication.md").exists() + assert (destination_dir / "payments.md").exists() is False + + def test_discover_promote_skips_existing_unless_overwrite( + self, tmp_path: Path + ) -> None: + runner = CliRunner() + with runner.isolated_filesystem(temp_dir=tmp_path): + 
source_dir = Path(".specleft/specs/_discovered") + destination_dir = Path(".specleft/specs") + source_dir.mkdir(parents=True, exist_ok=True) + destination_dir.mkdir(parents=True, exist_ok=True) + (source_dir / "user-authentication.md").write_text("# Feature: New\n") + existing = destination_dir / "user-authentication.md" + existing.write_text("# Feature: Existing\n") + + result = runner.invoke( + cli, + ["discover", "promote", "user-authentication", "--format", "json"], + ) + assert result.exit_code == 0 + payload = json.loads(result.output) + assert payload["promoted"] == [] + assert len(payload["skipped"]) == 1 + assert existing.read_text() == "# Feature: Existing\n" + + overwrite_result = runner.invoke( + cli, + [ + "discover", + "promote", + "user-authentication", + "--overwrite", + "--format", + "json", + ], + ) + overwrite_payload = json.loads(overwrite_result.output) + assert overwrite_result.exit_code == 0 + assert len(overwrite_payload["promoted"]) == 1 + assert existing.read_text() == "# Feature: New\n" + + def test_discover_json_contains_errors_when_miner_fails( + self, monkeypatch: pytest.MonkeyPatch, tmp_path: Path + ) -> None: + runner = CliRunner() + monkeypatch.setattr( + discover_module, + "build_default_pipeline", + lambda root, languages=None: _FakePipeline( + _report(root, errors=["git_history: git binary missing"]) + ), + ) + monkeypatch.setattr(discover_module, "group_items", lambda items: []) + + with runner.isolated_filesystem(temp_dir=tmp_path): + result = runner.invoke(cli, ["discover", "--format", "json"]) + assert result.exit_code == 0 + payload = json.loads(result.output) + assert payload["errors"] == ["git_history: git binary missing"] + + def test_discover_json_valid_when_all_miners_error( + self, monkeypatch: pytest.MonkeyPatch, tmp_path: Path + ) -> None: + runner = CliRunner() + monkeypatch.setattr( + discover_module, + "build_default_pipeline", + lambda root, languages=None: _FakePipeline( + _report(root, errors=["miner_a: boom", 
"miner_b: boom"], languages=[]) + ), + ) + monkeypatch.setattr(discover_module, "group_items", lambda items: []) + + with runner.isolated_filesystem(temp_dir=tmp_path): + result = runner.invoke(cli, ["discover", "--format", "json"]) + assert result.exit_code == 0 + payload = json.loads(result.output) + assert payload["features"] == [] + assert payload["total_features"] == 0 + assert payload["total_scenarios"] == 0 + assert payload["errors"] == ["miner_a: boom", "miner_b: boom"]