HYPERFLEET-759: Add Bill of Artifacts for HyperFleet MVP milestone #112
tirthct wants to merge 2 commits into openshift-hyperfleet:main
Conversation
@tirthct: This pull request references HYPERFLEET-759, which is a valid Jira issue. Warning: the referenced Jira issue has an invalid target version for the branch this PR targets: expected the story to target the "4.22.0" version, but no target version was set.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has not yet been approved by any approvers. The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files; approvers can indicate their approval by writing `/approve` in a comment.
No actionable comments were generated in the recent review. 🎉
Walkthrough: A new documentation file is added to inventory the HyperFleet MVP deliverables. The bill of artifacts catalogs core platform services, supporting tools, API specifications, infrastructure components, testing assets, CI/CD infrastructure, architecture documentation, integration points, architectural decisions, delivery milestones, and a repository table. No code changes or functional modifications are introduced.
Estimated code review effort: 🎯 1 (Trivial) | ⏱️ ~5 minutes
Pre-merge checks: ✅ 3 passed
| | |
|---|---|
| **Repository** | [hyperfleet-api](https://github.com/openshift-hyperfleet/hyperfleet-api) |
| **Language** | Go 1.24+ |
The API should be updated to Go 1.24. I opened https://redhat.atlassian.net/browse/HYPERFLEET-815 to track it.
- CEL-based configurable decision engine with generation-based (immediate on spec change), time-based (max age for periodic reconciliation), and new-resource detection logic
- Publishes CloudEvents v1 with CEL (Common Expression Language) for both decision logic and dynamic payload building, plus W3C trace propagation
- Horizontal sharding via config-driven label selectors — no leader election needed
- Broker abstraction: GCP Pub/Sub, RabbitMQ, and Stub backends via the `hyperfleet-broker` library
Can we add an internal link to hyperfleet-broker?
Something like:
- Broker abstraction: GCP Pub/Sub, RabbitMQ, and Stub backends via the `[hyperfleet-broker](#14-hyperfleet-broker)` library
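While we're on this hunk: the decision-engine bullets above could also be illustrated with a small config sketch in the doc. All field names below are illustrative assumptions, not the actual Sentinel schema:

```yaml
# Illustrative only; these field names are not the real Sentinel schema.
decision:
  generationBased: true        # trigger immediately when spec generation advances
  maxAge: 10m                  # trigger when the last reconcile is older than this
  newResources: true           # trigger for resources never reconciled before
  guard: "resource.status.phase != 'Deleting'"   # optional CEL expression
broker:
  type: rabbitmq               # gcp-pubsub | rabbitmq | stub (via hyperfleet-broker)
```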
Configuration-driven framework for executing provisioning tasks. Single binary, infinite configurations — you write YAML, not Go code.
- Four-phase execution pipeline: Param Extraction, Precondition Evaluation (structured conditions or CEL), Resource Application, Status Reporting
Can you update sentinel.md in this PR also? There's an inconsistency. sentinel.md says that it uses Go templates, not CEL.
| **Language** | Go 1.25 |
| **State** | Active, production-ready |
Kubernetes-native reconciliation trigger service implementing a poll-decide-publish loop. The orchestration brain of HyperFleet.
"Kubernetes-native reconciliation trigger service" may give the impression that the Sentinel
uses the Kubernetes controller pattern (informers, watches, controller-runtime). The
sentinel.md explicitly describes it as: "No Kubernetes controller pattern, just
periodic polling."
The Sentinel is a stateless Go service that periodically polls the REST API — it runs on
Kubernetes but does not use Kubernetes-native patterns.
Suggested fix:
Stateless reconciliation trigger service implementing a poll-decide-publish loop. The orchestration brain of HyperFleet.
- CEL-based configurable decision engine with generation-based (immediate on spec change), time-based (max age for periodic reconciliation), and new-resource detection logic
- Publishes CloudEvents v1 with CEL (Common Expression Language) for both decision logic and dynamic payload building, plus W3C trace propagation
- Horizontal sharding via config-driven label selectors — no leader election needed
Category: Architecture
"Horizontal sharding via config-driven label selectors" may overstate the capability. The
sentinel.md explicitly says this is NOT true sharding — it's label-based
filtering with no coordination between instances, meaning gaps and overlaps are possible.
- Horizontal scaling via config-driven label selectors for workload partitioning — no leader election needed
This preserves the intent while avoiding the "sharding" term that implies coordination guarantees the Sentinel does not provide.
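To make the distinction concrete, a hypothetical deployment might run two Sentinel instances with operator-maintained selectors. Selector values and field names here are illustrative assumptions, not the actual config schema:

```yaml
# Illustrative only; selector syntax and field names are assumptions.
# Instance A polls one set of clusters...
labelSelector: "region in (us-east1, us-west1)"
---
# ...instance B polls the rest. Nothing coordinates the instances or
# enforces that the selectors stay disjoint and complete, so gaps and
# overlaps are possible if the operator misconfigures them.
labelSelector: "region in (eu-west1)"
```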
Configuration-driven framework for executing provisioning tasks. Single binary, infinite configurations — you write YAML, not Go code.
- Four-phase execution pipeline: Param Extraction, Precondition Evaluation (structured conditions or CEL), Resource Application, Status Reporting
Category: Inconsistency
The fourth phase of the adapter pipeline is called "Post Actions" in the code
(PhasePostActions), not "Status Reporting." Post Actions is a more general concept — it
includes conditional API calls driven by when expressions, of which status reporting is just
one use case.
Suggested fix:
- Four-phase execution pipeline: Param Extraction, Precondition Evaluation (structured conditions or CEL), Resource Application, Post Actions (conditional API calls including status reporting)
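For readers of this thread, a sketch of a task config exercising all four phases might look like the following. The field names are hypothetical, not the adapter's actual schema:

```yaml
# Hypothetical adapter task config; field names are illustrative.
params:                              # 1. Param Extraction
  clusterId: "$.cluster.id"
preconditions:                       # 2. Precondition Evaluation (CEL form shown)
  - cel: "params.clusterId != ''"
resources:                           # 3. Resource Application
  - template: manifests/namespace.yaml
postActions:                         # 4. Post Actions
  - when: "phases.apply.succeeded"   # conditional API call driven by a when expression
    action: reportStatus             # status reporting is just one use case
```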
The artifact descriptions use different formats across sections — some have a metadata table, others do not. Suggested template for each core component:

### 1.1 HyperFleet API
| Field | Value |
|-------|-------|
| Repository | [hyperfleet-api](https://github.com/openshift-hyperfleet/hyperfleet-api) |
| Language | Go 1.24+ |
| State | Active, production-ready |
| Helm Chart | v1.0.0 |
| Container Image | `quay.io/openshift-hyperfleet/hyperfleet-api` |

Stateless REST API serving as the pure CRUD data layer...

#### Key Capabilities
- REST operations covering Cluster and NodePool resources
- PostgreSQL database with GORM ORM
- ...

This also brings the Helm chart version and container image closer to the component they describe. The supporting services table (section 2) is fine as a summary table since those are not key components.
---
## 11. Repository Summary
If we update the template to use this format, we can remove this Repository Summary section.
---
## 9. Key Architectural Decisions
Sections 9 (Key Architectural Decisions) and 10 (Delivery Milestones) feel out of scope for a
Bill of Artifacts. A BoA is an inventory of what was delivered — components, contracts,
infrastructure, docs. Architectural decisions belong in ADRs, and delivery milestones belong
in a project status report or roadmap.
The JIRA ticket reinforces this: "comprehensive inventory of all components, services,
tooling, documentation, and infrastructure built." It doesn't ask for timeline or decision
rationale.
Consider removing sections 9 and 10 and linking to existing docs instead (architecture decisions are already captured in ADRs). This keeps the document focused on its stated purpose and avoids duplication with existing docs.
| 8 | [hyperfleet-api-spec](https://github.com/openshift-hyperfleet/hyperfleet-api-spec) | API Contract | TypeSpec |
| 9 | [hyperfleet-infra](https://github.com/openshift-hyperfleet/hyperfleet-infra) | Infrastructure | Terraform/Helm |
| 10 | [hyperfleet-e2e](https://github.com/openshift-hyperfleet/hyperfleet-e2e) | Testing | Go |
| 11 | [architecture](https://github.com/openshift-hyperfleet/architecture) | Documentation | Markdown |
I think we can add a Changelog for this BoA. This will be constantly updated for post-MVP.
Change Log
| Date | Version | Change | Author |
|---|---|---|---|
| 2026-03-25 | 1.0 | Initial Bill of Artifacts | Tirth Chetan Thakkar |
---
## 7. Architecture Documentation
This whole section seems redundant as it lists every subdirectory of the architecture repo. For a "BoA", we can remove this section too.
---
## 6. CI/CD and Release Infrastructure
Section 6 (CI/CD and Release Infrastructure) feels thin compared to the other sections — just
3 generic bullet points. For a Bill of Artifacts, CI/CD pipelines are concrete deliverables
and deserve the same level of detail as the core services.
Consider expanding with specifics, for example:
## 6. CI/CD and Release Infrastructure
| Field | Value |
|-------|-------|
| Reference | [hyperfleet-release-process.md](https://github.com/openshift-hyperfleet/architecture/blob/main/hyperfleet/docs/hyperfleet-release-process.md) |
### 6.1 Prow CI
Primary CI system for core services. Presubmit and postsubmit jobs across 7 repositories
covering unit tests, integration tests, linting, and Helm chart validation. Container images
built via multi-stage Docker builds and published to `quay.io/openshift-hyperfleet/`.
### 6.2 Konflux/RHTAP
Tekton-based CI/CD pipelines for registry-credentials-service.
### 6.3 Release Process
Hybrid cadence with independent component versioning. Release branches with forward-port workflow, multi-gate readiness criteria. Validated releases represent compatibility-tested version combinations across all core services.

This brings the section in line with the level of detail in other sections and gives stakeholders a clearer picture of what was actually built.
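If the section is expanded, it could even show a concrete Prow job entry. The snippet below follows the standard Prow presubmit config shape, but the job name, image, and command are illustrative assumptions, not the project's actual job config:

```yaml
# Illustrative Prow presubmit; job name, image, and command are assumptions.
presubmits:
  openshift-hyperfleet/hyperfleet-api:
    - name: pull-hyperfleet-api-unit
      always_run: true
      decorate: true
      spec:
        containers:
          - image: quay.io/openshift-hyperfleet/builder:latest
            command: ["make", "test"]
```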
Black-box E2E testing framework for validating Critical User Journeys (CUJ). Ginkgo-based with ephemeral resource management, parallel execution, label-based filtering, JUnit XML reports, and container image support for CI.
**Test Suites:**
The E2E framework uses a well-defined tier classification for test severity (defined in
pkg/labels/labels.go), but the BoA test suites table doesn't mention it. Since tier
classification is part of the CI gate policy (Tier 0 blocks releases), it's a meaningful
deliverable worth surfacing.
Consider adding a Tier column to the test suites table:
| Suite | Tier | What it validates |
|-------|------|-------------------|
| Cluster Creation | Tier 0 | End-to-end cluster lifecycle: creation, initial conditions, adapter execution, final Ready state |
| NodePool Creation | Tier 0 | End-to-end nodepool lifecycle: creation under a parent cluster, adapter execution, Ready state |
| Adapter with Maestro Transport | Tier 0 | Full Maestro transport path: ManifestWork creation, agent applies to target cluster, status report |
| Cluster Concurrent Creation | Tier 1 | 5 simultaneous cluster creations reach Ready state without resource conflicts |
| NodePool Concurrent Creation | Tier 1 | 3 simultaneous nodepools under the same cluster reach Ready state |
| Cluster Adapter Failure | Tier 1 | Adapter precondition failures are reflected in cluster top-level status |
| Adapter Failover | Tier 1 | Adapter framework detects invalid Kubernetes resources and reports failures |

This makes the quality gate policy visible to stakeholders without needing to dig into the code.
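To show how tier labels can drive gating, here is a minimal self-contained sketch. The label constants and helper are hypothetical; the real definitions live in pkg/labels/labels.go and may differ:

```go
package main

import (
	"fmt"
	"sort"
)

// Hypothetical tier labels used for CI gating; the real definitions
// live in pkg/labels/labels.go and may differ.
const (
	Tier0 = "tier-0" // release-blocking suites
	Tier1 = "tier-1" // important but non-blocking suites
)

// suitesByTier returns the suite names carrying the given tier label,
// sorted for stable output.
func suitesByTier(suites map[string]string, tier string) []string {
	var out []string
	for name, t := range suites {
		if t == tier {
			out = append(out, name)
		}
	}
	sort.Strings(out)
	return out
}

func main() {
	suites := map[string]string{
		"Cluster Creation":  Tier0,
		"NodePool Creation": Tier0,
		"Adapter Failover":  Tier1,
	}
	// A release gate would fail the build if any Tier 0 suite fails.
	fmt.Println(suitesByTier(suites, Tier0))
}
```

In the real framework the same effect is achieved with Ginkgo label filtering rather than a hand-rolled map, but the gating idea is identical.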
**Additional:** RabbitMQ dev manifest, broker Helm values generator script, full lifecycle Makefile targets (`install-all`, `uninstall-all`, `status`).
### 4.2 Per-Component Helm Charts
Once we adopt the proposed layout, we can remove section 4.2
## 3. API Contracts and Specifications
### 3.1 HyperFleet API Spec (TypeSpec)
Section 3.1 references two different locations for the API contract, which could confuse
readers:
- the `hyperfleet-api-spec` repo as the TypeSpec source (version 1.0.2)
- `hyperfleet-api/openapi/openapi.yaml` as the "production OpenAPI contract used by code generation"
For a BoA, it's worth clarifying the relationship — which is the source of truth and which is
derived:
### 3.1 HyperFleet API Spec (TypeSpec)
| Field | Value |
|-------|-------|
| Repository | [hyperfleet-api-spec](https://github.com/openshift-hyperfleet/hyperfleet-api-spec) |
| Language | TypeSpec |
| Version | 1.0.2 |
| Generated Artifact | [openapi.yaml](https://github.com/openshift-hyperfleet/hyperfleet-api/blob/main/openapi/openapi.yaml) (committed to hyperfleet-api) |

This makes the pipeline clear: TypeSpec (source) → OpenAPI (generated) → code generation.
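To illustrate the source side of that pipeline, a minimal TypeSpec fragment might look like the sketch below. The model and route are invented for illustration and are not taken from hyperfleet-api-spec:

```typespec
// Illustrative TypeSpec; not the actual hyperfleet-api-spec models.
import "@typespec/http";
using TypeSpec.Http;

model Cluster {
  id: string;
  name: string;
}

@route("/clusters")
interface Clusters {
  @get list(): Cluster[];
}
```

Compiling this with the TypeSpec compiler and an OpenAPI emitter produces the openapi.yaml that is then committed to hyperfleet-api for code generation.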
---
## 2. Supporting Services and Tools
Section 2 (Supporting Services and Tools) lists three tools in a compact table but doesn't say
who uses them or why they exist. For an external stakeholder, it's hard to understand the
purpose of each tool without that context.
Consider adding a "Used by" column:
| Repository | Description | Language | Used by |
|---|---|---|---|
| maestro-cli | CLI for ManifestWork lifecycle management through Maestro. Dual-protocol: gRPC + HTTP. | Go 1.25 | Adapter (Maestro transport), developers for debugging |
| hyperfleet-credential-provider | Multi-cloud Kubernetes ExecCredential plugin for GCP/GKE, AWS/EKS, Azure/AKS authentication. Pure Go, no cloud CLI dependencies. | Go 1.24 | E2E test pipeline (Prow CI cluster authentication) |
I don't think registry-credentials-service is used by hyperfleet, so we can remove it.
Also, unlike sections 1 and 3, section 2 has no State column. Adding it keeps the format consistent:
| Repository | Description | Language | State | Used by |
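For context on how hyperfleet-credential-provider plugs in: ExecCredential plugins are wired into a kubeconfig roughly as below. The `users[].user.exec` shape is the standard client.authentication.k8s.io format, but the provider's subcommand and flags shown here are assumptions:

```yaml
# Standard ExecCredential wiring; the provider's args are illustrative.
users:
  - name: gke-cluster
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1
        command: hyperfleet-credential-provider
        args: ["gcp", "--cluster", "my-cluster"]   # hypothetical flags
        interactiveMode: Never
```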
| Adapter Failover | Adapter framework detects invalid Kubernetes resources and reports failures with clear error messages |
| Adapter with Maestro Transport | Full Maestro transport path: ManifestWork creation, Maestro agent applies to target cluster, adapter reports status back via discovery |
### 5.2 Per-Component Testing
The Per-Component Testing table (section 5.2) includes very specific numbers that will go
stale quickly:
- "~65-75% coverage target"
- "~30-40s for 10 suites (24 test cases)"
- "8+ value combinations"
- "10+ scenarios (PDB, RabbitMQ, Pub/Sub, PodMonitoring, PrometheusRule)"
- "9+ scenarios (broker, API config, PDB, autoscaling, probes)"
A Bill of Artifacts should describe what was delivered, not runtime metrics that change with
every PR. Consider either removing the specific numbers or replacing them with qualitative
descriptions:
- | hyperfleet-adapter | ... | 9+ scenarios (broker, API config, PDB, autoscaling, probes) | ~65-75% coverage target, ~30-40s for 10 suites (24 test cases) |
+ | hyperfleet-adapter | ... | Multiple scenarios covering broker, API config, PDB, autoscaling, probes | Dual integration strategy: envtest (CI) and K3s (local) |

This avoids creating a maintenance burden where the document drifts from reality after a few sprints.
About the PR description: the Test Plan section uses a code-focused template (unit tests, lint, Helm checks) even though this is a docs-only change.
Summary
A new document (`hyperfleet/mvp/bill-of-artifacts.md`) capturing all deliverables from the HyperFleet MVP milestone, including delivery milestones and a repository summary.
What's included
Design decisions
Test Plan
make test-allpassesmake lintpassesmake test-helm(if applicable)Summary by CodeRabbit