Conversation
3 issues found across 10 files
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="internal/controller/operator/factory/vmagent/vmagent.go">
<violation number="1" location="internal/controller/operator/factory/vmagent/vmagent.go:151">
P1: This adds an HPA, but vmagent reconciliation still overwrites `.spec.replicas`, so HPA scaling will be reverted on the next sync.</violation>
<violation number="2" location="internal/controller/operator/factory/vmagent/vmagent.go:1322">
P1: HPA targets the base vmagent name, which breaks sharded vmagent because the real Deployment/StatefulSet names are shard-suffixed.</violation>
</file>
<file name="config/crd/overlay/crd.descriptionless.yaml">
<violation number="1" location="config/crd/overlay/crd.descriptionless.yaml:4683">
P1: Require `hpa.maxReplicas` in this schema. As added, users can omit it, and the controller will build an invalid HPA with `maxReplicas: 0`.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
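Violation 2 above hinges on sharded vmagent naming: each shard gets its own suffixed Deployment/StatefulSet, so an HPA whose `scaleTargetRef.name` is the base vmagent name resolves to no workload. A minimal sketch of the mismatch — the helper name and the exact suffix format are assumptions for illustration, not the operator's actual naming scheme:

```go
package main

import "fmt"

// shardedWorkloadName mimics a shard-suffixed naming scheme: the real
// Deployment/StatefulSet for shard N is "<base>-N", not "<base>".
// (The exact suffix format used by the operator is an assumption here.)
func shardedWorkloadName(base string, shard int32) string {
	return fmt.Sprintf("%s-%d", base, shard)
}

func main() {
	base := "vmagent-example"
	// An HPA whose scaleTargetRef.name is just `base` matches none of the
	// shard workloads; each shard needs an HPA pointing at its own name.
	for shard := int32(0); shard < 2; shard++ {
		fmt.Println(shardedWorkloadName(base, shard))
	}
}
```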
Force-pushed 3c3cf20 → d4bf72f.
6 issues found across 17 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="docs/CHANGELOG.md">
<violation number="1" location="docs/CHANGELOG.md:27">
P1: Custom agent: **Changelog Review Agent**
This changelog entry is missing the required user-centric before/now/impact explanation, so it does not meet the mandatory changelog structure.</violation>
<violation number="2" location="docs/CHANGELOG.md:27">
P2: This changelog entry describes a different feature than the HPA changes in this PR, so the release notes become misleading.</violation>
</file>
<file name="api/operator/v1alpha1/vmdistributed_types.go">
<violation number="1" location="api/operator/v1alpha1/vmdistributed_types.go:204">
P2: Validate `statefulRollingUpdateStrategyBehavior` for `VMDistributed` zone agents before reconciling. Right now invalid `maxUnavailable` values are admitted and only surface later as reconcile errors.</violation>
</file>
<file name="api/operator/v1beta1/vmagent_types.go">
<violation number="1" location="api/operator/v1beta1/vmagent_types.go:103">
P2: Validate `statefulRollingUpdateStrategyBehavior` when this field is set. Right now invalid or empty values are admitted and only surface as StatefulSet reconcile failures.</violation>
</file>
<file name="config/crd/overlay/crd.yaml">
<violation number="1">
P2: Require `maxUnavailable` in this new object, otherwise `statefulRollingUpdateStrategyBehavior: {}` is accepted by the CRD and later fails reconciliation with a nil `IntOrString`.</violation>
</file>
<file name="internal/controller/operator/factory/reconcile/statefulset.go">
<violation number="1" location="internal/controller/operator/factory/reconcile/statefulset.go:108">
P1: This refactor drops the built-in protection for HPA-managed StatefulSets. Callers without a custom `Patch` will overwrite the HPA's current replica count on the next reconcile.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
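Several of the flagged issues boil down to missing admission-time validation: `hpa.maxReplicas` can be omitted (yielding an invalid HPA with `maxReplicas: 0`), and `statefulRollingUpdateStrategyBehavior: {}` is admitted with a nil `maxUnavailable`. A hedged sketch of what such checks could look like, using simplified stand-in types rather than the operator's real API types:

```go
package main

import (
	"errors"
	"fmt"
)

// Stand-ins for the real API types; field names follow the review comments.
type EmbeddedHPA struct {
	MinReplicas *int32
	MaxReplicas int32
}

type StatefulRollingUpdateStrategyBehavior struct {
	MaxUnavailable *int32 // the real field is an IntOrString; simplified here
}

func validateHPA(hpa *EmbeddedHPA) error {
	if hpa == nil {
		return nil
	}
	// Without this check the controller would build an HPA with
	// maxReplicas: 0, which the API server rejects.
	if hpa.MaxReplicas < 1 {
		return errors.New("hpa.maxReplicas must be >= 1")
	}
	return nil
}

func validateRollingBehavior(b *StatefulRollingUpdateStrategyBehavior) error {
	// Reject an empty object up front instead of failing later during
	// StatefulSet reconciliation on a nil value.
	if b != nil && b.MaxUnavailable == nil {
		return errors.New("statefulRollingUpdateStrategyBehavior.maxUnavailable is required")
	}
	return nil
}

func main() {
	fmt.Println(validateHPA(&EmbeddedHPA{MaxReplicas: 0}))
	fmt.Println(validateRollingBehavior(&StatefulRollingUpdateStrategyBehavior{}))
}
```

Equivalent constraints could also live in the CRD schema itself (marking `maxReplicas` and `maxUnavailable` as required), which would surface the error at apply time rather than in controller logs.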
Force-pushed b9a74a6 → 7630af5 → 3f5f01f.
```go
type DeploymentOpts struct {
	PatchSpec func(existingSpec, newSpec *appsv1.DeploymentSpec)
}
```
Let's add `PatchMeta` to get rid of `owner *metav1.OwnerReference` too?

Not sure we need this: `owner` is only used in `mergeMeta`, which would then have to become public and be propagated to every place `reconcile.Deployment` is invoked. It might make sense to add `owner` to the opts struct to simplify the signature, but the other reconcile functions take it as a parameter too.

Ah, okay, let's keep it as is then.
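The `PatchSpec` hook discussed above is what lets a caller keep HPA-managed replicas from being reset. A minimal, self-contained sketch (stand-in types instead of `appsv1.DeploymentSpec`, and an assumed helper name) of how such a hook could work:

```go
package main

import "fmt"

// Minimal stand-in for appsv1.DeploymentSpec so the sketch is self-contained.
type DeploymentSpec struct {
	Replicas *int32
}

// DeploymentOpts mirrors the opts struct from this PR, with the stand-in type.
type DeploymentOpts struct {
	PatchSpec func(existingSpec, newSpec *DeploymentSpec)
}

// keepHPAReplicas copies the live replica count into the desired spec, so a
// reconcile does not revert scaling decisions made by the HPA.
func keepHPAReplicas(existing, desired *DeploymentSpec) {
	if existing.Replicas != nil {
		desired.Replicas = existing.Replicas
	}
}

func main() {
	live, cr := int32(5), int32(1)
	existing := &DeploymentSpec{Replicas: &live} // HPA scaled the Deployment to 5
	desired := &DeploymentSpec{Replicas: &cr}    // the CR still says 1
	opts := DeploymentOpts{PatchSpec: keepHPAReplicas}
	opts.PatchSpec(existing, desired)
	fmt.Println(*desired.Replicas) // 5: the HPA's value survives the reconcile
}
```

Callers that do not pass such a hook get plain spec overwrite, which is the regression flagged for `statefulset.go` above.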
internal/controller/operator/factory/reconcile/statefulset_test.go (outdated; resolved)
```diff
 	}
 }
-if err := reconcile.Deployment(ctx, rclient, lbDep, prevLB, false, &owner); err != nil {
+if err := reconcile.Deployment(ctx, rclient, lbDep, prevLB, &owner, nil); err != nil {
```
HPA for `VMAuth` replicas maybe? I'm fine with a follow-up PR too.

Currently there's no HPA in VM/VL/VTCluster.
Force-pushed 3f5f01f → b0c77ff → d9d6b70.
@vrutkovs
fixes #1961
fixes #1101
Summary by cubic

Adds HPA support for `VMAgent` and customizable StatefulSet rolling updates for `VMAgent` and `VMDistributed` `ZoneAgent`. The operator now reconciles a `HorizontalPodAutoscaler` for `VMAgent`, preserves HPA-controlled replicas on updates, aligns stateful HPA with `shardCount`, and removes the HPA on delete; fixes #1961 and #1101.

New Features

- `spec.hpa` (`EmbeddedHPA`) in `VMAgentSpec` and `VMDistributedZoneAgentSpec`; in stateful mode, HPA replicas map to `spec.shardCount`. Validation blocks `hpa` together with `daemonSetMode`. The HPA is created/updated and cleaned up on `VMAgent` delete.
- `spec.statefulRollingUpdateStrategyBehavior` for `VMAgent` and `VMDistributedZoneAgent` (applies with `OnDelete`). Rolling-update behavior is respected across the StatefulSets of `VMAuth`, `VLCluster` (insert/select/storage), `VTCluster` (insert/select/storage), and `VMCluster`.

Refactors

- Changed `shardCount` to `int32` across APIs, controllers, and tests (`VMAgent`, `VMAnomaly`); updated sharding helpers.
- Renamed `STSOptions` → `StatefulSetOpts`; added `DeploymentOpts` with a `PatchSpec` hook to retain replicas when HPA controls scaling.

Written for commit d9d6b70. Summary will update on new commits.
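For reference, a hypothetical `VMAgent` manifest exercising the new fields might look like the following — the field names match those discussed in this PR, but the surrounding values (metrics defaults, suffixes, exact apiVersion behavior) are illustrative assumptions, not tested configuration:

```yaml
apiVersion: operator.victoriametrics.com/v1beta1
kind: VMAgent
metadata:
  name: example
spec:
  statefulMode: true
  shardCount: 2            # in stateful mode, HPA replicas map to this value
  hpa:
    minReplicas: 2
    maxReplicas: 10        # required; omitting it would produce an invalid HPA
  statefulRollingUpdateStrategyBehavior:
    maxUnavailable: 1      # required; an empty object fails reconciliation
```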