
feat(must-gather): add new Chart for the RHDH Must-Gather tool [RHIDP-12626]#326

Open
rm3l wants to merge 35 commits into redhat-developer:main from
rm3l:rhidp-12626-add-new-helm-chart-for-rhdh-must-gather-for-easier-consumption-against-supported-non-ocp-platforms

Conversation


@rm3l rm3l commented Mar 11, 2026

Description of the change

This is to allow for easier consumption on supported non-OCP platforms, as we are preparing the tool for Technical Preview (TP).

Which issue(s) does this PR fix or relate to

How to test changes / Special notes to the reviewer

Just deploy this chart using helm:

helm upgrade --install my-rhdh-mg charts/must-gather

Checklist

  • For each Chart updated, version bumped in the corresponding Chart.yaml according to Semantic Versioning.
  • For each Chart updated, variables are documented in the values.yaml and added to the corresponding README.md. The pre-commit utility can be used to generate the necessary content. Run pre-commit run --all-files to run the hooks and then push any resulting changes. The pre-commit Workflow will enforce this and warn you if needed.
  • JSON Schema template updated and re-generated the raw schema via the pre-commit hook.
  • Tests pass using the Chart Testing tool and the ct lint command.
  • If you updated the orchestrator-infra chart, make sure the versions of the Knative CRDs are aligned with the versions of the CRDs installed by the OpenShift Serverless operators declared in the values.yaml file. See Installing Knative Eventing and Knative Serving CRDs for more details.

@rm3l changed the title from "feat(must-gather): add new Chart for the RHDH Must-Gather tool" to "feat(must-gather): add new Chart for the RHDH Must-Gather tool [RHIDP-12626]" on Mar 11, 2026

rhdh-qodo-merge bot commented Mar 11, 2026

PR Reviewer Guide 🔍

(Review updated until commit 3751e9e)

Here are some key observations to aid the review process:

🎫 Ticket compliance analysis 🔶

RHIDP-12626 - Partially compliant

Compliant requirements:

  • Provide an easier way to run the RHDH must-gather on non-OCP Kubernetes (avoid requiring users to apply a Kustomize project from the GitHub repo).
  • Ship a new Helm Chart for the must-gather so customers can consume it from the official chart repository.
  • Keep OpenShift customers using oc adm must-gather --image (Helm chart is primarily for non-OCP usage).

Non-compliant requirements:

Requires further human verification:

  • Validate the chart can be consumed/published as intended from the official repository (release/publishing pipeline + index).
  • Validate the deployed must-gather output is complete/expected on representative non-OCP clusters (e.g., EKS/GKE/AKS) and that the retrieval instructions work as documented.
⏱️ Estimated effort to review: 3 🔵🔵🔵⚪⚪
🔒 Security concerns

Secrets handling:
The chart supports enabling secrets collection via gather.withSecrets. While it defaults to false, enabling it can collect sensitive data. Ensure the must-gather image actually sanitizes secrets as implied by NOTES.txt, and consider documenting any limitations and recommending least-privilege RBAC (especially when rbac.scope is cluster).

⚡ Recommended focus areas for review

Possible Issue

The template sets $effectiveNamespaces := .Values.gather.namespaces | default list. In Helm, default expects the default value as its first argument, but list here is a function reference rather than an invoked empty list. This may render incorrectly and/or make the subsequent if $effectiveNamespaces check always truthy, unintentionally emitting the --namespaces argument. Consider changing to | default (list) (or otherwise ensuring an actual empty list is produced).

{{- $nsScope := ne (.Values.rbac.scope | default "cluster") "cluster" }}
{{- $effectiveNamespaces := .Values.gather.namespaces | default list }}
{{- if $nsScope }}
{{- $effectiveNamespaces = list .Release.Namespace }}
{{- end }}
{{- if or $nsScope .Values.gather.withSecrets .Values.gather.withHeapDumps .Values.gather.clusterInfo .Values.gather.withoutOperator .Values.gather.withoutOrchestrator .Values.gather.withoutHelm .Values.gather.withoutPlatform .Values.gather.withoutRoute .Values.gather.withoutIngress .Values.gather.withoutNamespaceInspect $effectiveNamespaces .Values.gather.extraArgs }}
args:
  {{- if .Values.gather.withSecrets }}
  - "--with-secrets"
  {{- end }}
  {{- if .Values.gather.withHeapDumps }}
  - "--with-heap-dumps"
  {{- end }}
  {{- if .Values.gather.clusterInfo }}
  - "--cluster-info"
  {{- end }}
  {{- if .Values.gather.withoutOperator }}
  - "--without-operator"
  {{- end }}
  {{- if .Values.gather.withoutOrchestrator }}
  - "--without-orchestrator"
  {{- end }}
  {{- if .Values.gather.withoutHelm }}
  - "--without-helm"
  {{- end }}
  {{- if or .Values.gather.withoutPlatform $nsScope }}
  - "--without-platform"
  {{- end }}
  {{- if .Values.gather.withoutRoute }}
  - "--without-route"
  {{- end }}
  {{- if .Values.gather.withoutIngress }}
  - "--without-ingress"
  {{- end }}
  {{- if .Values.gather.withoutNamespaceInspect }}
  - "--without-namespace-inspect"
  {{- end }}
  {{- if $effectiveNamespaces }}
  - "--namespaces"
  - {{ $effectiveNamespaces | join "," | quote }}
  {{- end }}
  {{- with .Values.gather.extraArgs }}
  {{- toYaml . | nindent 12 }}
  {{- end }}
{{- end }}
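A minimal sketch of the suggested correction (assuming the rest of the template is unchanged): invoking list with parentheses yields an actual empty list, which is falsy, so the emptiness check behaves as intended:

```yaml
{{- /* Invoke list with () so `default` receives an actual empty list,
       not a function reference; an empty list is falsy in `if`. */}}
{{- $effectiveNamespaces := .Values.gather.namespaces | default (list) }}
{{- if $effectiveNamespaces }}
- "--namespaces"
- {{ $effectiveNamespaces | join "," | quote }}
{{- end }}
```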
Compatibility

The pod depends on the kube-root-ca.crt ConfigMap in the release namespace via the projected kube-api-access volume. While common on many clusters, confirm this is consistently present on the supported non-OCP platforms/versions, or consider making this more robust (e.g., optional, or relying on the projected service account token volume defaults when possible).

volumes:
  - name: kube-api-access
    projected:
      defaultMode: 0444
      sources:
        - serviceAccountToken:
            expirationSeconds: {{ .Values.serviceAccount.tokenExpirationSeconds | default 3600 }}
            path: token
        - configMap:
            name: kube-root-ca.crt
            items:
              - key: ca.crt
                path: ca.crt
        - downwardAPI:
            items:
              - path: namespace
                fieldRef:
                  fieldPath: metadata.namespace
  - name: output
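One hedged way to make the projection more robust, sketched below: `optional` is a standard field on projected ConfigMap sources, letting the pod start even when the ConfigMap is absent.

```yaml
- configMap:
    name: kube-root-ca.crt
    # Marking the source optional lets the pod start on clusters where
    # kube-root-ca.crt is not published in the release namespace.
    optional: true
    items:
      - key: ca.crt
        path: ca.crt
```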
📚 Focus areas based on broader codebase context

Docs Bug

The README suggests configuring the test image via --set test.image=<image>, but the chart values structure models test.image as an object with fields (e.g., registry, repository, tag, digest, pullPolicy). This likely won’t work as intended with Helm and should be updated to document setting the specific subfields (e.g., --set test.image.repository=... and --set test.image.tag=...). (Ref 4)

```sh
helm install <release_name> <repo> \
  --set test.image=<image>
```

Reference reasoning: The existing chart values patterns in the repo define test.image as a nested object with explicit fields, indicating that configuration should be done via those sub-keys rather than replacing test.image with a scalar string.

📄 References
  1. redhat-developer/rhdh-chart/ct-install.yaml [1-11]
  2. redhat-developer/rhdh-chart/ct.yaml [1-10]
  3. redhat-developer/rhdh-chart/cr.yaml [1-2]
  4. redhat-developer/rhdh-chart/charts/backstage/values.yaml [342-356]
  5. redhat-developer/rhdh-chart/charts/backstage/templates/tests/test-secret.yaml [1-3]
  6. redhat-developer/rhdh-chart/charts/orchestrator-software-templates-infra/templates/tests/infra-test.yaml [0-2]
  7. redhat-developer/rhdh-chart/charts/orchestrator-infra/templates/tests/infra-test.yaml [0-2]
  8. redhat-developer/rhdh-chart/charts/orchestrator-software-templates-infra/Chart.yaml [1-17]

@rhdh-qodo-merge rhdh-qodo-merge bot added the "enhancement" (New feature or request) label on Mar 11, 2026

rhdh-qodo-merge bot commented Mar 11, 2026

PR Type

(Describe updated until commit 0669dbe)

Enhancement, Tests, Documentation


Description

  • Introduces a new Helm chart for the RHDH Must-Gather diagnostic tool to enable easier consumption against supported non-OCP platforms in preparation for Technical Preview

  • Implements complete Kubernetes Job template (job.yaml) for running gather operations with configurable timeout, retry limits, and automatic cleanup via TTL

  • Creates PersistentVolumeClaim template for persistent storage of gathered diagnostic data

  • Defines comprehensive RBAC configuration with ClusterRole and ClusterRoleBinding granting permissions for core Kubernetes resources, RHDH-specific backstages resources, and Orchestrator components (SonataFlow, Knative, Serverless)

  • Adds data retriever pod template (data-retriever-pod.yaml) for collecting gathered diagnostic data from the PVC with node affinity configuration

  • Includes Helm test template (test.yaml) with ServiceAccount, Role, and RoleBinding for validating chart deployment and verifying gathered data integrity

  • Provides comprehensive documentation with README, NOTES.txt, and values documentation covering installation, testing, and configuration procedures

  • Auto-generates JSON schema files (values.schema.json and values.schema.tmpl.json) for Helm values validation with detailed type definitions and descriptions

  • Defines default configuration values (values.yaml) for image settings, RBAC, job parameters, gather options, persistence, and resource limits

  • Includes helper template functions (_helpers.tpl) for chart name, labels, service account, image building, and unique run ID generation

  • Provides six CI test configurations covering default values, disabled data retriever, minimal collection, namespace-scoped collection, secrets/cluster info gathering, and disabled Helm tests

  • Bumps chart version to 0.1.0 with appVersion 1.0.0 and requires Kubernetes >= 1.27.0-0


File Walkthrough

Relevant files
Configuration changes
4 files
values.schema.json
Auto-generated JSON schema for must-gather Helm chart values

charts/must-gather/values.schema.json

  • Generated comprehensive JSON schema file with 1568 lines defining all
    Helm chart values
  • Includes detailed schema definitions for affinity, pod security
    context, resources, persistence, and other Kubernetes configurations
  • Provides type validation, default values, and descriptions for all
    configurable parameters
  • Replaces empty placeholder file with complete schema auto-generated
    from values template
+1568/-0
values.schema.tmpl.json
JSON schema template for Helm values validation                   

charts/must-gather/values.schema.tmpl.json

  • New template file for JSON schema generation with 535 lines
  • Defines schema structure with references to Kubernetes JSON schema
    definitions
  • Includes custom properties for image, job, gather, persistence, and
    dataRetriever configurations
  • Uses external schema references for complex Kubernetes types like
    SecurityContext and Affinity
+535/-0 
values.yaml
Default values for must-gather Helm chart                               

charts/must-gather/values.yaml

  • New values file with 163 lines defining all default configuration
    parameters
  • Includes image configuration, RBAC, job settings, gather options, and
    resource limits
  • Configures persistence, data retriever pod, and Helm test settings
  • Provides pod security context, node selector, tolerations, and
    affinity defaults
+163/-0 
.helmignore
Helm chart ignore patterns configuration                                 

charts/must-gather/.helmignore

  • Standard Helm ignore patterns for common VCS and IDE files
  • Excludes backup files, temporary files, and development environment
    directories
+23/-0   
Documentation
3 files
README.md
Documentation for RHDH must-gather Helm chart                       

charts/must-gather/README.md

  • New comprehensive README with 137 lines documenting the must-gather
    Helm chart
  • Includes TL;DR installation instructions, testing procedures, and
    uninstall guidance
  • Documents all 30+ configurable values with descriptions, types, and
    defaults
  • Provides examples for disabling test pod and customizing test image
+137/-0 
NOTES.txt
Helm deployment notes and user instructions                           

charts/must-gather/templates/NOTES.txt

  • Provides post-deployment instructions for monitoring the must-gather
    job
  • Includes commands to check job logs and wait for completion
  • Offers conditional instructions based on dataRetriever.enabled flag
  • Displays configuration summary (log level, timeout, storage size,
    namespaces, etc.)
  • References GitHub repository for additional information
+47/-0   
README.md.gotmpl
Helm chart README with usage documentation                             

charts/must-gather/README.md.gotmpl

  • Provides comprehensive chart documentation with TL;DR installation
    command
  • Documents testing procedures using helm test command
  • Includes examples for disabling test pod and customizing test image
  • Explains uninstallation process
  • References values.yaml for configuration parameters
+81/-0   
Tests
7 files
test.yaml
Helm test template for must-gather chart validation           

charts/must-gather/templates/tests/test.yaml

  • New Helm test template with 133 lines for validating chart deployment
  • Creates ServiceAccount, Role, and RoleBinding for test pod with
    appropriate permissions
  • Implements test Pod that waits for gather job completion and validates
    data retriever output
  • Includes tar archive validation to ensure gathered data is not empty
+133/-0 
default-values.yaml
Default values test configuration                                               

charts/must-gather/ci/default-values.yaml

  • Placeholder file for default values testing in CI pipeline
+1/-0     
with-data-retriever-disabled-values.yaml
Data retriever disabled test configuration                             

charts/must-gather/ci/with-data-retriever-disabled-values.yaml

  • Test configuration disabling the data retriever component
  • Allows testing scenarios where users retrieve data directly from PVC
+3/-0     
with-minimal-collection-values.yaml
Minimal collection test configuration                                       

charts/must-gather/ci/with-minimal-collection-values.yaml

  • Test configuration for minimal diagnostic collection
  • Disables optional components (operator, orchestrator, helm, route,
    ingress)
  • Sets log level to TRACE for detailed output
+8/-0     
with-namespace-scoped-values.yaml
Namespace-scoped collection test configuration                     

charts/must-gather/ci/with-namespace-scoped-values.yaml

  • Test configuration for namespace-scoped collection
  • Targets specific namespaces (rhdh-prod, rhdh-staging)
  • Sets collection time window to last 2 hours
+6/-0     
with-secrets-and-cluster-info-values.yaml
Secrets and cluster info test configuration                           

charts/must-gather/ci/with-secrets-and-cluster-info-values.yaml

  • Test configuration enabling secrets collection (sanitized)
  • Enables cluster-level information gathering
  • Sets log level to DEBUG for troubleshooting
+5/-0     
with-test-disabled-values.yaml
Helm test disabled test configuration                                       

charts/must-gather/ci/with-test-disabled-values.yaml

  • Test configuration disabling the Helm test pod
  • Allows testing scenarios without test suite execution
+3/-0     
Enhancement
7 files
job.yaml
Kubernetes Job template for must-gather execution               

charts/must-gather/templates/job.yaml

  • New Kubernetes Job template with 125 lines for running the gather
    operation
  • Configures job with timeout, retry limits, and TTL for automatic
    cleanup
  • Mounts persistent volume for output storage and passes gather script
    arguments
  • Includes environment variables for logging, timeouts, and optional
    collection features
+125/-0 
_helpers.tpl
Helm template helper functions for must-gather chart         

charts/must-gather/templates/_helpers.tpl

  • New Helm template helpers file with 111 lines of utility functions
  • Defines functions for chart name, fullname, labels, service account,
    and image building
  • Implements unique run ID generation based on timestamp for Job
    immutability workaround
  • Provides helper functions for data retriever pod naming and image
    reference construction
+111/-0 
pvc.yaml
PersistentVolumeClaim template for must-gather data storage

charts/must-gather/templates/pvc.yaml

  • New PersistentVolumeClaim template with 15 lines for data storage
  • Configures storage size, access mode, and optional storage class
  • Provides persistent storage for gathered diagnostic data
+15/-0   
serviceaccount.yaml
ServiceAccount template for must-gather Helm chart             

charts/must-gather/templates/serviceaccount.yaml

  • New ServiceAccount template with 13 lines for RBAC configuration
  • Creates service account with optional annotations and automount
    settings
  • Conditionally created based on serviceAccount.create configuration
+13/-0   
Chart.yaml
Initial Helm chart metadata for RHDH Must-Gather                 

charts/must-gather/Chart.yaml

  • New Helm chart metadata for RHDH Must-Gather diagnostic tool
  • Defines chart version 0.1.0 with appVersion 1.0.0
  • Specifies Kubernetes version requirement (>= 1.27.0-0)
  • Includes OpenShift chart annotations and maintainer information
+38/-0   
rbac.yaml
RBAC configuration for must-gather permissions                     

charts/must-gather/templates/rbac.yaml

  • Creates ClusterRole with permissions for reading Kubernetes core
    resources (pods, services, configmaps, secrets, etc.)
  • Adds permissions for apps, networking, RBAC, storage, and API
    extensions resources
  • Includes RHDH-specific permissions for backstages resources
  • Defines permissions for Orchestrator components (SonataFlow, Knative,
    Serverless)
  • Creates ClusterRoleBinding to bind the role to the service account
+61/-0   
data-retriever-pod.yaml
Data retriever pod template for output collection               

charts/must-gather/templates/data-retriever-pod.yaml

  • Defines a Pod for retrieving gathered diagnostic data from the PVC
  • Configures pod affinity to run on the same node as the gather job
  • Mounts the must-gather output PVC as read-only volume
  • Supports image customization via dataRetriever.image values
  • Includes security context and resource configuration options
+64/-0   


rhdh-qodo-merge bot commented Mar 11, 2026

PR Code Suggestions ✨

Explore these optional code suggestions:

Security
Conditionally grant secret read permissions

Conditionally grant permissions to read secrets based on the
.Values.gather.withSecrets flag to adhere to the principle of least privilege.

charts/must-gather/templates/rbac.yaml [8-23]

 rules:
   - apiGroups: [""]
-    resources: ["*"]
+    resources:
+      - pods
+      - services
+      - endpoints
+      - persistentvolumeclaims
+      - configmaps
+      - serviceaccounts
+      - nodes
+      - namespaces
+      - events
     verbs: ["get", "list"]
+  {{- if .Values.gather.withSecrets }}
+  - apiGroups: [""]
+    resources: ["secrets"]
+    verbs: ["get", "list"]
+  {{- end }}
   - apiGroups: ["apps", "extensions"]
     resources: ["*"]
     verbs: ["get", "list"]
   - apiGroups: ["networking.k8s.io"]
     resources: ["*"]
     verbs: ["get", "list"]
   - apiGroups: ["rbac.authorization.k8s.io"]
     resources: ["*"]
     verbs: ["get", "list"]
   - apiGroups: ["storage.k8s.io"]
     resources: ["*"]
     verbs: ["get", "list"]
Suggestion importance[1-10]: 9


Why: This suggestion correctly identifies and resolves a security vulnerability where overly broad permissions are granted, violating the principle of least privilege. The fix is accurate and significantly improves the security of the chart.

High
General
Use chart appVersion for image tag

Change the default image.tag in values.yaml from "latest" to "" to allow it to
fall back to the chart's appVersion, improving versioning and reproducibility.

charts/must-gather/values.yaml [6-10]

 image:
   repository: quay.io/rhdh-community/rhdh-must-gather
   pullPolicy: IfNotPresent
   # -- -- Overrides the image tag whose default is the chart appVersion.
-  tag: "latest"
+  tag: ""
Suggestion importance[1-10]: 7


Why: This is a valid suggestion that aligns the chart with Helm best practices for image versioning, improving reproducibility and maintainability by defaulting the image tag to the chart's appVersion.

Medium
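For illustration, a simplified, hypothetical helper shows why an empty tag can fall back to the chart's appVersion; the chart's real rhdh-must-gather.image helper takes a dict of "image" and "defaultTag" arguments (per the test template) and may differ:

```yaml
{{- /* Hypothetical simplified helper, not the chart's actual one:
       an empty .Values.image.tag falls back to .Chart.AppVersion. */ -}}
{{- define "rhdh-must-gather.image.simple" -}}
{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}
{{- end }}
```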
Quote storage size value

Add quotes around the .Values.persistence.size value in the PVC template to
ensure it is treated as a string by YAML.

charts/must-gather/templates/pvc.yaml [13-15]

 resources:
   requests:
-    storage: {{ .Values.persistence.size }}
+    storage: {{ .Values.persistence.size | quote }}

[Suggestion processed]

Suggestion importance[1-10]: 5


Why: The suggestion correctly recommends quoting the persistence.size value to ensure it's always treated as a string, which is a good practice in Helm templates to prevent potential parsing issues.

Low
Possible issue
Use POSIX -s for emptiness check

Replace the non-portable stat command in the test script with the
POSIX-compliant [ -s file ] operator to check if the output file exists and is
not empty.

charts/must-gather/templates/tests/test.yaml [89-94]

-size=$(stat -c%s /tmp/rhdh-must-gather-output.tar.gz 2>/dev/null || stat -f%z /tmp/rhdh-must-gather-output.tar.gz 2>/dev/null)
-if [ "$size" -le 0 ] 2>/dev/null; then
-  echo "FAIL: retrieved archive is empty"
+if [ ! -s /tmp/rhdh-must-gather-output.tar.gz ]; then
+  echo "FAIL: retrieved archive is empty or missing"
   exit 1
 fi
-echo "PASS: gathered data retrieved successfully (${size} bytes)"
+echo "PASS: gathered data retrieved successfully"
Suggestion importance[1-10]: 7


Why: This suggestion correctly identifies a portability issue with the stat command and proposes a more robust, POSIX-compliant solution using [ -s file ] that simplifies the test script and improves its reliability.

Medium
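The behaviour of the POSIX -s primary can be demonstrated in isolation (the file names below are placeholders, not the chart's actual paths):

```shell
#!/bin/sh
# [ -s FILE ] is true only if FILE exists and has size > 0 -- portable
# across GNU and BSD userlands, unlike stat's -c/-f flag split.
tmpdir=$(mktemp -d)
printf 'data' > "$tmpdir/full.tar.gz"   # non-empty file
: > "$tmpdir/empty.tar.gz"              # empty file

for f in full.tar.gz empty.tar.gz missing.tar.gz; do
  if [ -s "$tmpdir/$f" ]; then
    echo "PASS: $f is present and non-empty"
  else
    echo "FAIL: $f is empty or missing"
  fi
done
rm -rf "$tmpdir"
```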

@rm3l force-pushed the rhidp-12626-add-new-helm-chart-for-rhdh-must-gather-for-easier-consumption-against-supported-non-ocp-platforms branch 2 times, most recently from 799f3ab to 8b06a13, on March 11, 2026 at 17:26

rm3l commented Mar 12, 2026

Failed conditions
2 Security Hotspots

The Security issues reported by SonarCloud are related to the pods/exec RBAC permission. This permission is actually required by the must-gather tool to collect data from running containers.

@rm3l rm3l marked this pull request as ready for review March 12, 2026 21:16
@rm3l rm3l requested a review from a team as a code owner March 12, 2026 21:16
@rhdh-qodo-merge

PR Reviewer Guide 🔍

Here are some key observations to aid the review process:

🎫 Ticket compliance analysis 🔶

RHIDP-12626 - Partially compliant

Compliant requirements:

  • Provide a new Helm Chart for the RHDH must-gather tool to simplify usage on supported non-OCP Kubernetes platforms.

Non-compliant requirements:

  • Enable customers to consume the must-gather deployment from the official chart repository (charts.openshift.io / official charts distribution) rather than applying a Kustomize project from GitHub.

Requires further human verification:

  • Ensure OCP customers continue to use oc adm must-gather --image (confirm docs/release messaging; not verifiable from this diff alone).
  • Verify publication/availability of the chart in the intended official repository and that installation instructions match that distribution path.
⏱️ Estimated effort to review: 3 🔵🔵🔵⚪⚪
🔒 Security concerns

RBAC broadness / command execution:
charts/must-gather/templates/tests/test.yaml grants create on pods/exec to a test ServiceAccount and performs kubectl exec into the data-retriever pod. While this is namespaced and hook-scoped, pods/exec is a sensitive permission and can be disallowed by some cluster policies; ensure this is an intentional and documented requirement and that it cannot be abused beyond the intended pod selection.

⚡ Recommended focus areas for review

Schema mismatch

The generated schema and the template schema appear to disagree on the default for job.ttlSecondsAfterFinished (generated shows a numeric default while the template sets an empty-string default). This can lead to confusing validation behavior for users and tooling (e.g., helm lint, IDE validation) and should be reconciled so the template and generated schema reflect the same intended default and type.

"job": {
    "additionalProperties": false,
    "properties": {
        "activeDeadlineSeconds": {
            "default": 3600,
            "title": "Job timeout in seconds.",
            "type": "integer"
        },
        "backoffLimit": {
            "default": 3,
            "title": "Number of retries before marking job as failed.",
            "type": "integer"
        },
        "ttlSecondsAfterFinished": {
            "default": 600,
            "title": "TTL for automatic cleanup after job finishes (seconds). Set to a positive value to enable automatic cleanup.",
            "type": [
                "integer",
                "string"
            ]
        }
    },
    "title": "Job configuration.",
    "type": "object"
},
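One hedged way to reconcile the two (a sketch only — the intended default may differ) is to pin a single type and default in the schema template so the generated schema and the values template agree:

```json
"ttlSecondsAfterFinished": {
  "default": 600,
  "title": "TTL for automatic cleanup after job finishes (seconds). Set to a positive value to enable automatic cleanup.",
  "type": "integer"
}
```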
RBAC/exec usage

The Helm test hook creates a namespaced Role granting pods/exec and then uses kubectl exec to tar and stream /data from the data-retriever pod. Please validate that this is acceptable for the chart’s security posture (even if namespaced), and that the test pod image reliably contains the required tooling (kubectl, tar, stat) across environments; otherwise helm test may fail or be flagged by security scanners/policies.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: {{ include "rhdh-must-gather.fullname" . }}-test
  labels:
    {{- include "rhdh-must-gather.labels" . | nindent 4 }}
    app.kubernetes.io/component: test
  annotations:
    helm.sh/hook: test
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
    helm.sh/hook-weight: "-1"
rules:
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
  - apiGroups: [""]
    resources: ["pods/exec"] # NOSONAR - exec is required to retrieve must-gather output from the data-retriever pod during helm test
    verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: {{ include "rhdh-must-gather.fullname" . }}-test
  labels:
    {{- include "rhdh-must-gather.labels" . | nindent 4 }}
    app.kubernetes.io/component: test
  annotations:
    helm.sh/hook: test
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
    helm.sh/hook-weight: "-1"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: {{ include "rhdh-must-gather.fullname" . }}-test
subjects:
  - kind: ServiceAccount
    name: {{ include "rhdh-must-gather.fullname" . }}-test
    namespace: {{ .Release.Namespace }}
---
apiVersion: v1
kind: Pod
metadata:
  name: {{ include "rhdh-must-gather.fullname" . }}-test
  labels:
    {{- include "rhdh-must-gather.labels" . | nindent 4 }}
    app.kubernetes.io/component: test
  annotations:
    helm.sh/hook: test
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
spec:
  serviceAccountName: {{ include "rhdh-must-gather.fullname" . }}-test
  restartPolicy: Never
  {{- with .Values.imagePullSecrets }}
  imagePullSecrets:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  {{- with .Values.podSecurityContext }}
  securityContext:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  containers:
    - name: test
      image: {{ include "rhdh-must-gather.image" (dict "image" .Values.test.image "defaultTag" "latest") | quote }} # NOSONAR
      imagePullPolicy: {{ .Values.test.image.pullPolicy }}
      {{- with .Values.securityContext }}
      securityContext:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: JOB_NAME
          value: {{ include "rhdh-must-gather.jobName" . }}
        - name: DATA_RETRIEVER_POD
          value: {{ include "rhdh-must-gather.dataRetrieverName" . }}
        - name: JOB_TIMEOUT
          value: "{{ .Values.job.activeDeadlineSeconds }}"
      command: ["/bin/sh", "-c"]
      args:
        - |
          set -e
          echo "Step 1: Waiting for the gather job to complete..."
          kubectl -n "$NAMESPACE" wait --for=condition=complete "job/$JOB_NAME" --timeout="${JOB_TIMEOUT}s"

          echo "Step 2: Waiting for the data retriever pod to be ready..."
          kubectl -n "$NAMESPACE" wait --for=condition=ready "pod/$DATA_RETRIEVER_POD" --timeout=60s

          echo "Step 3: Retrieving gathered data from the data retriever pod..."
          kubectl -n "$NAMESPACE" exec "$DATA_RETRIEVER_POD" -- tar czf - -C /data . > /tmp/rhdh-must-gather-output.tar.gz

          size=$(stat -c%s /tmp/rhdh-must-gather-output.tar.gz 2>/dev/null || stat -f%z /tmp/rhdh-must-gather-output.tar.gz 2>/dev/null)
          if [ "$size" -le 0 ] 2>/dev/null; then
            echo "FAIL: retrieved archive is empty"
            exit 1
          fi
          echo "PASS: gathered data retrieved successfully (${size} bytes)"
      resources:
📚 Focus areas based on broader codebase context

Consistency

This chart introduces test.enabled / test.image as the configuration key, while other charts in this repo use the plural tests.enabled pattern for Helm test pods. Consider aligning to the existing repo convention (or documenting the deviation clearly) to reduce surprise for users and keep automation/CT expectations consistent across charts. (Ref 4, Ref 5)

{{- if and .Values.test.enabled .Values.dataRetriever.enabled -}}
apiVersion: v1
kind: ServiceAccount

Reference reasoning: The existing charts define Helm test toggles under a tests top-level key in values.yaml and gate the test template rendering using that key. Reusing the same key structure in this new chart would match established repo conventions and reduce cross-chart inconsistencies for consumers.

📄 References
  1. redhat-developer/rhdh-chart/charts/orchestrator-infra/Chart.yaml [1-18]
  2. redhat-developer/rhdh-chart/charts/orchestrator-infra/crds/knative-serving/knative-serving-crd.yaml [2365-2374]
  3. redhat-developer/rhdh-chart/charts/orchestrator-infra/crds/knative-eventing/knative-eventing-crd.yaml [2262-2271]
  4. redhat-developer/rhdh-chart/charts/orchestrator-infra/values.yaml [37-41]
  5. redhat-developer/rhdh-chart/charts/orchestrator-infra/templates/tests/infra-test.yaml [0-2]
  6. redhat-developer/rhdh-chart/ct.yaml [1-10]
  7. redhat-developer/rhdh-chart/ct-install.yaml [1-11]
  8. redhat-developer/rhdh-chart/charts/orchestrator-software-templates-infra/Chart.yaml [1-17]


@rhdh-qodo-merge rhdh-qodo-merge bot added the documentation (Improvements or additions to documentation) and Tests labels on Mar 12, 2026
@rhdh-qodo-merge

rhdh-qodo-merge bot commented Mar 12, 2026

PR Code Suggestions ✨

Explore these optional code suggestions:

Security
Conditionally grant permission to access secrets
Suggestion Impact: The commit removed "secrets" from the default core resources list and added a conditional rule granting "get" and "list" on secrets only when enabled (implemented as .Values.gather.withSecrets OR .Values.gather.withHelm).

code diff:

   - apiGroups: [""]
-    resources: ["pods", "pods/log", "services", "endpoints", "configmaps", "secrets", "events", "persistentvolumeclaims", "serviceaccounts", "namespaces", "nodes", "replicationcontrollers", "resourcequotas", "limitranges"]
+    resources: ["pods", "pods/log", "services", "endpoints", "configmaps", "events", "persistentvolumeclaims", "serviceaccounts", "replicationcontrollers", "resourcequotas", "limitranges"]
     verbs: ["get", "list"]
+  {{- if or .Values.gather.withSecrets .Values.gather.withHelm }}
+  - apiGroups: [""]
+    resources: ["secrets"]
+    verbs: ["get", "list"]
+  {{- end }}

To enhance security, make the get and list permissions for secrets conditional
on the .Values.gather.withSecrets flag being enabled.

charts/must-gather/templates/rbac.yaml [8-11]

 rules:
   - apiGroups: [""]
-    resources: ["pods", "pods/log", "services", "endpoints", "configmaps", "secrets", "events", "persistentvolumeclaims", "serviceaccounts", "namespaces", "nodes", "replicationcontrollers", "resourcequotas", "limitranges"]
+    resources: ["pods", "pods/log", "services", "endpoints", "configmaps", "events", "persistentvolumeclaims", "serviceaccounts", "namespaces", "nodes", "replicationcontrollers", "resourcequotas", "limitranges"]
     verbs: ["get", "list"]
+{{- if .Values.gather.withSecrets }}
+  - apiGroups: [""]
+    resources: ["secrets"]
+    verbs: ["get", "list"]
+{{- end }}

[Suggestion processed]

Suggestion importance[1-10]: 9


Why: This is a valid and critical security suggestion that correctly identifies an overly permissive ClusterRole and proposes restricting access to secrets based on a configuration flag, adhering to the principle of least privilege.

High
Conditionally grant pod exec permission

To enhance security, make the create permission for pods/exec conditional on the
.Values.gather.withHeapDumps flag being enabled.

charts/must-gather/templates/rbac.yaml [33-35]

-- apiGroups: [""]
-  resources: ["pods/exec"] # NOSONAR - exec is required to collect data from the running containers
-  verbs: ["create"]
+{{- if .Values.gather.withHeapDumps }}
+  - apiGroups: [""]
+    resources: ["pods/exec"] # NOSONAR - exec is required to collect data from the running containers
+    verbs: ["create"]
+{{- end }}
Suggestion importance[1-10]: 9


Why: This is a valid and critical security suggestion that correctly identifies the risk of unconditionally granting pods/exec permission and proposes restricting it based on a configuration flag, adhering to the principle of least privilege.

High
Possible issue
Remove problematic pod affinity rule
Suggestion Impact: The change removed the entire data-retriever pod manifest, which also eliminates the problematic podAffinity rule (and all other pod spec content).

code diff:

@@ -1,65 +1 @@
-{{- if .Values.dataRetriever.enabled -}}
-apiVersion: v1
-kind: Pod
-metadata:
-  name: {{ include "rhdh-must-gather.dataRetrieverName" . }}
-  labels:
-    {{- include "rhdh-must-gather.labels" . | nindent 4 }}
-    app.kubernetes.io/component: data-retriever
-spec:
-  automountServiceAccountToken: false
-  restartPolicy: Never
-  {{- with .Values.imagePullSecrets }}
-  imagePullSecrets:
-    {{- toYaml . | nindent 4 }}
-  {{- end }}
-  {{- with .Values.podSecurityContext }}
-  securityContext:
-    {{- toYaml . | nindent 4 }}
-  {{- end }}
-  affinity:
-    podAffinity:
-      requiredDuringSchedulingIgnoredDuringExecution:
-        - labelSelector:
-            matchLabels:
-              {{- include "rhdh-must-gather.selectorLabels" . | nindent 14 }}
-              app.kubernetes.io/component: gather
-          topologyKey: kubernetes.io/hostname
-  containers:
-    - name: data-retriever
-      {{- $drImage := deepCopy .Values.image }}
-      {{- if .Values.dataRetriever.image.registry }}{{ $_ := set $drImage "registry" .Values.dataRetriever.image.registry }}{{- end }}
-      {{- if .Values.dataRetriever.image.repository }}{{ $_ := set $drImage "repository" .Values.dataRetriever.image.repository }}{{- end }}
-      {{- if .Values.dataRetriever.image.tag }}{{ $_ := set $drImage "tag" .Values.dataRetriever.image.tag }}{{- end }}
-      {{- if .Values.dataRetriever.image.digest }}{{ $_ := set $drImage "digest" .Values.dataRetriever.image.digest }}{{- end }}
-      image: {{ include "rhdh-must-gather.image" (dict "image" $drImage "defaultTag" .Chart.AppVersion) | quote }} # NOSONAR
-      imagePullPolicy: {{ .Values.dataRetriever.image.pullPolicy | default .Values.image.pullPolicy | default "IfNotPresent" }}
-      {{- with .Values.securityContext }}
-      securityContext:
-        {{- toYaml . | nindent 8 }}
-      {{- end }}
-      command:
-        - sleep
-        - infinity
-      volumeMounts:
-        - name: must-gather-output
-          mountPath: /data
-          readOnly: true
-      {{- with .Values.dataRetriever.resources }}
-      resources:
-        {{- toYaml . | nindent 8 }}
-      {{- end }}
-  volumes:
-    - name: must-gather-output
-      persistentVolumeClaim:
-        claimName: {{ include "rhdh-must-gather.fullname" . }}-pvc
-  {{- with .Values.nodeSelector }}
-  nodeSelector:
-    {{- toYaml . | nindent 4 }}
-  {{- end }}
-  {{- with .Values.tolerations }}
-  tolerations:
-    {{- toYaml . | nindent 4 }}
-  {{- end }}
-{{- end }}

Remove the podAffinity rule from the data-retriever pod to prevent potential
scheduling failures after the gather job pod completes.

charts/must-gather/templates/data-retriever-pod.yaml [20-27]

+{{- with .Values.affinity }}
 affinity:
-  podAffinity:
-    requiredDuringSchedulingIgnoredDuringExecution:
-      - labelSelector:
-          matchLabels:
-            {{- include "rhdh-must-gather.selectorLabels" . | nindent 14 }}
-            app.kubernetes.io/component: gather
-        topologyKey: kubernetes.io/hostname
+  {{- toYaml . | nindent 4 }}
+{{- end }}

[Suggestion processed]

Suggestion importance[1-10]: 8

__

Why: The suggestion correctly identifies a potential scheduling failure for the data-retriever pod due to its affinity with a transient gather pod, which is a significant correctness issue.

Medium
Correct schema for job TTL setting
Suggestion Impact: The commit removed the entire `job` schema block, including `ttlSecondsAfterFinished`, rather than correcting its type/default as suggested.

code diff:

-        "job": {
-            "title": "Job configuration.",
-            "type": "object",
-            "additionalProperties": false,
-            "properties": {
-                "activeDeadlineSeconds": {
-                    "title": "Job timeout in seconds.",
-                    "type": "integer",
-                    "default": 3600
-                },
-                "backoffLimit": {
-                    "title": "Number of retries before marking job as failed.",
-                    "type": "integer",
-                    "default": 3
-                },
-                "ttlSecondsAfterFinished": {
-                    "title": "TTL for automatic cleanup after job finishes (seconds). Set to a positive value to enable automatic cleanup.",
-                    "type": [
-                        "integer",
-                        "string"
-                    ],
-                    "default": ""
-                }

Correct the schema for job.ttlSecondsAfterFinished by changing its type to
integer and setting the default value to 600 to prevent potential template
rendering errors.

charts/must-gather/values.schema.tmpl.json [122-129]

 "ttlSecondsAfterFinished": {
     "title": "TTL for automatic cleanup after job finishes (seconds). Set to a positive value to enable automatic cleanup.",
-    "type": [
-        "integer",
-        "string"
-    ],
-    "default": ""
+    "type": "integer",
+    "default": 600
 }

[Suggestion processed]

Suggestion importance[1-10]: 7


Why: This suggestion correctly identifies that the schema for ttlSecondsAfterFinished in the template file is incorrect, which could lead to deployment failures, and provides a fix that aligns it with the Kubernetes API and the chart's values.yaml.

Medium
General
Improve test validation for gathered data
Suggestion Impact: Added a tar file listing count check (file_count) and extended the failure condition/message to fail when the archive is empty or contains no files.

code diff:

           size=$(stat -c%s /tmp/rhdh-must-gather-output.tar.gz 2>/dev/null || stat -f%z /tmp/rhdh-must-gather-output.tar.gz 2>/dev/null)
-          if [ "$size" -le 0 ] 2>/dev/null; then
-            echo "FAIL: retrieved archive is empty"
+          file_count=$(tar -tzf /tmp/rhdh-must-gather-output.tar.gz | wc -l)
+          if [ "$size" -le 0 ] 2>/dev/null || [ "$file_count" -eq 0 ]; then
+            echo "FAIL: retrieved archive is empty or contains no files"
             exit 1

Improve the test validation by checking that the gathered data archive is not
empty and contains at least one file, instead of only checking its size.

charts/must-gather/templates/tests/test.yaml [110-114]

 size=$(stat -c%s /tmp/rhdh-must-gather-output.tar.gz 2>/dev/null || stat -f%z /tmp/rhdh-must-gather-output.tar.gz 2>/dev/null)
-if [ "$size" -le 0 ] 2>/dev/null; then
-  echo "FAIL: retrieved archive is empty"
+file_count=$(tar -tzf /tmp/rhdh-must-gather-output.tar.gz | wc -l)
+if [ "$size" -le 0 ] 2>/dev/null || [ "$file_count" -eq 0 ]; then
+  echo "FAIL: retrieved archive is empty or contains no files"
   exit 1
 fi

[Suggestion processed]

Suggestion importance[1-10]: 6


Why: The suggestion correctly points out that checking only the size of a .tar.gz file is insufficient and proposes a more robust check by counting the files within the archive, which improves test reliability.

Low

@rm3l
Member Author

rm3l commented Mar 13, 2026

/agentic_review

@rhdh-qodo-merge

rhdh-qodo-merge bot commented Mar 13, 2026

Code Review by Qodo

🐞 Bugs (2) 📘 Rule violations (0) 📎 Requirement gaps (1)



Action required

1. README uses non-official repo 📎 Requirement gap ✓ Correctness
Description
The new must-gather chart installation instructions point users to
https://redhat-developer.github.io/rhdh-chart instead of the official charts.openshift.io
repository. This does not meet the requirement to publish/host the chart for customer consumption
from charts.openshift.io.
Code

charts/must-gather/README.md.gotmpl[R20-23]

+```console
+helm upgrade --install my-rhdh-must-gather rhdh-must-gather \
+  --repo https://redhat-developer.github.io/rhdh-chart \
+  --version {{ template "chart.version" . }}
Evidence
PR Compliance ID 2 requires the chart be consumable from the official charts.openshift.io
repository, but the added TL;DR install commands in the chart documentation use a GitHub Pages Helm
repo URL instead.

Publish/host the new must-gather Helm Chart in the official charts.openshift.io repository
charts/must-gather/README.md.gotmpl[20-23]
charts/must-gather/README.md[27-30]
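For illustration, an install command aligned with the compliance requirement might look like the following. The chart name "redhat-rhdh-must-gather" is an assumption; the actual name is determined when the chart is published to charts.openshift.io:

```console
helm repo add openshift-helm-charts https://charts.openshift.io/
helm upgrade --install my-rhdh-must-gather openshift-helm-charts/redhat-rhdh-must-gather
```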

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The must-gather chart documentation (generated and template) instructs users to install from `https://redhat-developer.github.io/rhdh-chart`, which conflicts with the compliance requirement to publish/host the chart in the official `charts.openshift.io` repository.

## Issue Context
PR Compliance ID 2 requires customers be able to pull/install the RHDH must-gather chart from `charts.openshift.io`. The current TL;DR sections instead reference a GitHub Pages chart repo URL.

## Fix Focus Areas
- charts/must-gather/README.md.gotmpl[20-23]
- charts/must-gather/README.md[27-30]



2. Dynamic timestamp forces rollouts 🐞 Bug ⛯ Reliability
Description
The Deployment pod template annotation uses now, so rendering the chart produces a different
manifest each time even with identical inputs. This triggers unnecessary ReplicaSet rollouts and can
cause continuous churn in GitOps/helm-controller style reconciliation.
Code

charts/must-gather/templates/deployment.yaml[R26-27]

+      annotations:
+        rhdh-must-gather/run-timestamp: {{ now | date "2006-01-02T15:04:05Z" | quote }}
Evidence
The annotation value is computed at render-time from the current clock (now), which changes on
every render; because it is inside .spec.template.metadata.annotations, it changes the pod
template hash and forces a rollout.

charts/must-gather/templates/deployment.yaml[26-30]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
The Deployment’s pod template includes an annotation rendered with `now`, making the manifest non-deterministic and forcing a new ReplicaSet whenever the chart is rendered.

### Issue Context
Non-deterministic templates are especially problematic under GitOps or any controller that repeatedly renders Helm charts to detect drift.

### Fix Focus Areas
- charts/must-gather/templates/deployment.yaml[26-30]
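One deterministic alternative, sketched here under the assumption that an optional user-supplied values key (`gather.runTimestamp`, hypothetical) is acceptable. The annotation is rendered only when explicitly set, so repeated renders with identical inputs yield identical manifests:

```yaml
# Sketch only: replaces the `now`-based annotation with an optional,
# user-supplied value so the pod template hash stays stable across renders.
template:
  metadata:
    {{- with .Values.gather.runTimestamp }}
    annotations:
      rhdh-must-gather/run-timestamp: {{ . | quote }}
    {{- end }}
```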



3. Helm default list bug 🐞 Bug ✓ Correctness
Description
The deployment template initializes $effectiveNamespaces with default list, which (with the
default empty gather.namespaces) resolves to a non-array value and then gets passed to join,
breaking template rendering. This can prevent helm install/helm upgrade from succeeding with
default values.
Code

charts/must-gather/templates/deployment.yaml[R71-72]

+          {{- $nsScope := ne (.Values.rbac.scope | default "cluster") "cluster" }}
+          {{- $effectiveNamespaces := .Values.gather.namespaces | default list }}
Evidence
gather.namespaces defaults to an empty array, but the template uses default list (without
invoking list) and later unconditionally treats $effectiveNamespaces as a list by calling join
on it.

charts/must-gather/templates/deployment.yaml[71-72]
charts/must-gather/templates/deployment.yaml[109-112]
charts/must-gather/values.yaml[71-75]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`$effectiveNamespaces` is computed with `default list`, which does not produce an empty list value and later causes `join` to fail when rendering args.

## Issue Context
The chart’s default `gather.namespaces` is `[]`, so the `default` branch is taken in the current template.

## Fix Focus Areas
- charts/must-gather/templates/deployment.yaml[71-112]

## Suggested change
Replace:
- `{{- $effectiveNamespaces := .Values.gather.namespaces | default list }}`
With one of:
- `{{- $effectiveNamespaces := .Values.gather.namespaces | default (list) }}`
- or `{{- $effectiveNamespaces := (.Values.gather.namespaces | default (list)) }}`
- or simply `{{- $effectiveNamespaces := .Values.gather.namespaces }}` (since values/schema already default it to `[]`).




Remediation recommended

4. Secrets access by default 🐞 Bug ⛨ Security
Description
With default values (gather.withHelm: true, gather.withSecrets: false), the chart still renders
RBAC rules granting get/list on Secrets, including cluster-wide when rbac.scope=cluster. This
unnecessarily expands blast radius if the must-gather pod/SA is compromised.
Code

charts/must-gather/templates/clusterrbac.yaml[R12-16]

+  {{- if or .Values.gather.withSecrets .Values.gather.withHelm }}
+  - apiGroups: [""]
+    resources: ["secrets"]
+    verbs: ["get", "list"]
+  {{- end }}
Evidence
The ClusterRole/Role templates grant Secrets read when either withSecrets OR withHelm is
enabled; since withHelm defaults to true, Secrets permissions are enabled by default even though
explicit Secrets collection is disabled by default.

charts/must-gather/templates/clusterrbac.yaml[12-16]
charts/must-gather/values.yaml[57-66]
charts/must-gather/templates/rbac.yaml[13-17]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
The chart currently grants `secrets` get/list permissions whenever `gather.withHelm` is enabled. Because `gather.withHelm` defaults to `true`, this makes Secrets read access enabled by default even when `gather.withSecrets` is `false`.

### Issue Context
This affects both cluster-scoped RBAC (ClusterRole) and namespace-scoped RBAC (Role). In the default configuration (`rbac.scope=cluster`), this results in cluster-wide Secrets read permissions.

### Fix Focus Areas
- charts/must-gather/templates/clusterrbac.yaml[12-16]
- charts/must-gather/templates/rbac.yaml[13-17]
- charts/must-gather/values.yaml[57-66]
- charts/must-gather/values.schema.tmpl.json[160-189]
- charts/must-gather/values.schema.json[re-generated to match tmpl]




@rm3l rm3l force-pushed the rhidp-12626-add-new-helm-chart-for-rhdh-must-gather-for-easier-consumption-against-supported-non-ocp-platforms branch from 6e01b02 to 22ae5bc Compare March 13, 2026 15:12
@rm3l rm3l force-pushed the rhidp-12626-add-new-helm-chart-for-rhdh-must-gather-for-easier-consumption-against-supported-non-ocp-platforms branch from 92338db to c856e27 Compare March 13, 2026 15:29
rm3l and others added 24 commits March 18, 2026 23:16
Co-authored-by: rhdh-qodo-merge[bot] <232573409+rhdh-qodo-merge[bot]@users.noreply.github.com>
The previous approach used to create issues with the PVC
automountServiceAccountToken: true mounts credentials into every
container in the pod, including the data-holder which only runs
"sleep infinity". Replace it with a projected volume carrying a
bound (time-limited) service account token, mounted exclusively
into the containers that actually call the Kubernetes API.

Assisted-by: Cursor
Made-with: Cursor
…ion is enabled

This is needed because 'helm list' uses a Secret storage backend by default, so Secret access is required to identify such Helm releases
…lt values enforced in values.yaml and the JSON schema file
Kubernetes requires the installing user to already hold any permission
they grant via a Role or ClusterRole. When deploying with namespace-
scoped RBAC on a cluster where CRDs like backstages or sonataflows are
not installed, the role creation fails because those permissions cannot
be escalated.

Rather than a single opaque toggle, expose per-API-group booleans under
rbac.rules so users can precisely disable only the rules they cannot
grant, while keeping the corresponding gather.with* collection flags
enabled — the gather script already handles missing permissions
gracefully at runtime.

Also removes a duplicate config.openshift.io/clusterversions rule from
the ClusterRole template.

Assisted-by: Cursor
Made-with: Cursor
@rm3l rm3l force-pushed the rhidp-12626-add-new-helm-chart-for-rhdh-must-gather-for-easier-consumption-against-supported-non-ocp-platforms branch from 58ebf1a to 0175764 Compare March 18, 2026 22:51
rm3l added 2 commits March 30, 2026 00:10
…-helm-chart-for-rhdh-must-gather-for-easier-consumption-against-supported-non-ocp-platforms
@sonarqubecloud

Quality Gate failed

Failed conditions
2 Security Hotspots

See analysis details on SonarQube Cloud
