Conversation
Walkthrough: Adds two new Ginkgo/Gomega e2e test suites that validate NetworkPolicy objects, reconciliation, and runtime connectivity/enforcement for service-ca and service-ca-operator; also moves serviceCAOperatorNamespace into network_policy.go.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
Actionable comments posted: 5
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/e2e/network_policy_enforcement.go`:
- Around line 101-124: The testServiceCAOperatorNetworkPolicyEnforcement
function runs connectivity assertions against openshift-service-ca-operator
without waiting for the operator to settle; call
waitForClusterOperatorAvailableNotProgressingNotDegraded for the service-ca
cluster operator (before any expectConnectivity or createServerPod usage) to
ensure the operator is Available=True, Progressing=False, Degraded=False; place
the wait immediately after obtaining kubeClient/namespace and before creating
the test pod and running the three traffic checks so the assertions are not
flaky.
- Around line 117-119: The test is currently asserting “deny” by hitting ports
where netexecPod isn’t listening (so failures could be just closed ports);
update the test and server to use a second known-open listener and validate
filtering by actually having netexecPod listen on both ports (e.g., 8443 and an
auxiliary port like 12345) and then use expectConnectivity to assert that only
8443 is reachable from openshift-monitoring while the auxiliary port is denied;
change setup where netexecPod is instantiated to pass/start the extra
--http-port and update the expectConnectivity calls (references: netexecPod,
expectConnectivity, serverIP and the port lists) to test reachability against
both real listeners so policy denial is proven rather than “nothing listening.”
In `@test/e2e/network_policy.go`:
- Around line 265-278: restoreNetworkPolicy currently treats the first
successful Get() after Delete() as "restored", which can be the same
pre-deletion object or a broken recreation; change it to capture the original
NetworkPolicy's UID and Spec before calling Delete() and then PollImmediate
until you observe a NetworkPolicy with a different UID AND whose Spec equals the
original Spec (use a deep equality check) before returning success; reference
restoreNetworkPolicy and mutateAndRestoreNetworkPolicy when implementing this
check and ensure you continue polling on NotFound and other transient errors
rather than returning early.
- Around line 35-36: The test `testServiceCANetworkPolicyReconcile()`
mutates/deletes live NetworkPolicy objects and must not run in parallel; update
the test registration that calls `testServiceCANetworkPolicyReconcile()` (the
g.It(...) declaration) to mark it as serial/disruptive per suite conventions
(e.g., add "[Serial]" and/or "[Disruptive]" labels in the test description or
use the framework's `g.Serial` wrapper) so it runs in isolation, and apply the
same change to the other related specs that run
`testServiceCANetworkPolicyReconcile()` (the block covering the other tests
around the same function call).
- Around line 139-152: The requireDefaultDenyAll function currently only checks
for an empty PodSelector and presence of both PolicyTypes, but does not ensure
there are no allow rules; update requireDefaultDenyAll to also assert that
policy.Spec.Ingress and policy.Spec.Egress are empty slices (length == 0) and
g.Fail with a descriptive message including policy.Namespace, policy.Name and
the offending rules if either is non-empty so any policy containing ingress or
egress rules is rejected.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 763a2f3e-3826-4adc-9ed7-56c5d7bc6b04
📒 Files selected for processing (2)
test/e2e/network_policy.go
test/e2e/network_policy_enforcement.go
```go
g.It("[Operator][NetworkPolicy] should restore service-ca NetworkPolicies after delete or mutation[Timeout: 30m]", func() {
	testServiceCANetworkPolicyReconcile()
```
Run the reconcile spec in isolation.
This spec deletes and mutates live NetworkPolicy objects in openshift-service-ca*. In a parallel e2e run, that can perturb unrelated specs while the operator is reconciling and cause cross-test flakes. Please make it serial/disruptive per the suite’s conventions, or otherwise isolate it from parallel specs.
Also applies to: 88-130
```go
func requireDefaultDenyAll(policy *networkingv1.NetworkPolicy) {
	g.GinkgoHelper()
	if len(policy.Spec.PodSelector.MatchLabels) != 0 || len(policy.Spec.PodSelector.MatchExpressions) != 0 {
		g.Fail(fmt.Sprintf("%s/%s: expected empty podSelector", policy.Namespace, policy.Name))
	}

	policyTypes := sets.NewString()
	for _, policyType := range policy.Spec.PolicyTypes {
		policyTypes.Insert(string(policyType))
	}
	if !policyTypes.Has(string(networkingv1.PolicyTypeIngress)) || !policyTypes.Has(string(networkingv1.PolicyTypeEgress)) {
		g.Fail(fmt.Sprintf("%s/%s: expected both Ingress and Egress policyTypes, got %v", policy.Namespace, policy.Name, policy.Spec.PolicyTypes))
	}
}
```
requireDefaultDenyAll should reject policies with allow rules.
An empty pod selector plus both policy types is not sufficient: a policy with any ingress or egress rules would still pass here and would no longer be default-deny. Assert that both rule lists are empty as well.
Proposed fix:

```diff
 func requireDefaultDenyAll(policy *networkingv1.NetworkPolicy) {
 	g.GinkgoHelper()
 	if len(policy.Spec.PodSelector.MatchLabels) != 0 || len(policy.Spec.PodSelector.MatchExpressions) != 0 {
 		g.Fail(fmt.Sprintf("%s/%s: expected empty podSelector", policy.Namespace, policy.Name))
 	}
+	if len(policy.Spec.Ingress) != 0 || len(policy.Spec.Egress) != 0 {
+		g.Fail(fmt.Sprintf(
+			"%s/%s: expected no ingress/egress rules for default-deny, got ingress=%d egress=%d",
+			policy.Namespace,
+			policy.Name,
+			len(policy.Spec.Ingress),
+			len(policy.Spec.Egress),
+		))
+	}
 	policyTypes := sets.NewString()
 	for _, policyType := range policy.Spec.PolicyTypes {
 		policyTypes.Insert(string(policyType))
 	}
```
```go
func restoreNetworkPolicy(ctx context.Context, client kubernetes.Interface, namespace, name string) {
	g.GinkgoHelper()
	g.GinkgoWriter.Printf("deleting NetworkPolicy %s/%s\n", namespace, name)
	o.Expect(client.NetworkingV1().NetworkPolicies(namespace).Delete(ctx, name, metav1.DeleteOptions{})).NotTo(o.HaveOccurred())
	err := wait.PollImmediate(5*time.Second, 10*time.Minute, func() (bool, error) {
		_, err := client.NetworkingV1().NetworkPolicies(namespace).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil
		}
		return true, nil
	})
	o.Expect(err).NotTo(o.HaveOccurred(), "timed out waiting for NetworkPolicy %s/%s to be restored", namespace, name)
	g.GinkgoWriter.Printf("NetworkPolicy %s/%s restored\n", namespace, name)
}
```
restoreNetworkPolicy can pass before any real reconciliation happened.
Delete() is asynchronous, and this helper treats the first successful Get() as “restored”. That can still be the original object while deletion is in flight. It also never checks UID/spec, so a broken recreation can pass here and then become the baseline for mutateAndRestoreNetworkPolicy. Wait for a different UID with the original Spec before declaring success.
Force-pushed 9a0622e to 7f4aed6 (compare).
♻️ Duplicate comments (2)
test/e2e/network_policy_enforcement.go (2)
Lines 203-235: ⚠️ Potential issue | 🟠 Major — The "wrong port denied" checks are still ambiguous.
This helper only starts one listener, so the failures on other ports can come from "nothing is listening," not from NetworkPolicy enforcement. That makes the deny assertions at Lines 117-119, 159-167, and 198-200 non-proving. Use a second intentionally-open listener (another container or pod is fine) and make that the denied-port control.
Verification: this script prints the local helper and the negative port assertions, then fetches the upstream agnhost netexec source and shows the http-port flag registration. Expected result: one `--http-port` in this file and a single-value `http-port` flag upstream.

```bash
#!/bin/bash
set -euo pipefail
echo "Server helper used by the service-ca port checks:"
sed -n '203,235p' test/e2e/network_policy_enforcement.go
echo
echo "Negative port assertions that depend on that helper:"
sed -n '117,119p;159,167p;198,200p' test/e2e/network_policy_enforcement.go
echo
echo "Upstream agnhost netexec flag definition:"
for ref in v1.35.1 release-1.35 master main; do
  url="https://raw.githubusercontent.com/kubernetes/kubernetes/${ref}/test/images/agnhost/netexec/netexec.go"
  if content="$(curl -fsSL "$url" 2>/dev/null)"; then
    printf 'ref=%s\n' "$ref"
    printf '%s\n' "$content" | rg -n -C2 'http-port|IntVar'
    exit 0
  fi
done
echo "Could not fetch agnhost netexec source from the expected upstream refs." >&2
exit 1
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/e2e/network_policy_enforcement.go` around lines 203 - 235, The netexecPod helper currently only exposes a single listener (netexec --http-port) so asserts that a different port is denied are ambiguous; modify netexecPod to add a second intentionally-open listener (either a second container in the same Pod or a separate Pod created by a new helper) that listens on a known control port, then change the negative-port assertions to probe the control listener as the "allowed" baseline and assert that the control port is reachable while the targeted denied port is rejected by NetworkPolicy; reference netexecPod (and the new control container/pod name and control port constant) when updating the tests that perform the deny checks so they explicitly use the control listener as the reachable comparator.
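A hedged sketch of what the suggested two-listener server could look like as a manifest. The container names, the auxiliary control port 12345, and the image tag are illustrative assumptions, not the suite's actual helper (agnhost netexec takes a single --http-port, hence the second container):

```yaml
# Hypothetical server pod with two netexec containers, so the "denied" port
# has a real listener behind it and a failure proves policy filtering.
apiVersion: v1
kind: Pod
metadata:
  name: netexec
  namespace: openshift-service-ca-operator
spec:
  containers:
  - name: netexec
    image: registry.k8s.io/e2e-test-images/agnhost:2.45   # tag illustrative
    args: ["netexec", "--http-port=8443"]
    ports:
    - containerPort: 8443
  - name: netexec-control   # second, intentionally-open listener (deny control)
    image: registry.k8s.io/e2e-test-images/agnhost:2.45
    args: ["netexec", "--http-port=12345"]
    ports:
    - containerPort: 12345
```

With this shape, the test would assert 8443 is reachable from an allowed source while 12345 — which is genuinely listening — is denied.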
Lines 101-110: ⚠️ Potential issue | 🟠 Major — Wait for the service-ca ClusterOperator to settle before these probes run.
All four service-ca-specific helpers start creating probe pods and asserting traffic immediately. Because expectConnectivity accepts the first matching probe, running while the operator is still reconciling can make these results timing-dependent. Gate the checks on Available=True, Progressing=False, and Degraded=False before createServerPod.
Also applies to: 126-131, 143-148, 170-175
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/e2e/network_policy_enforcement.go` around lines 101 - 110, Before creating probe pods in testServiceCAOperatorNetworkPolicyEnforcement(), wait for the "service-ca" ClusterOperator to reach Available=True, Progressing=False, Degraded=False; add a blocking check (e.g., reuse or add a helper like waitForClusterOperatorConditions or poll via the ClusterOperator API) immediately before the call to createServerPod so the subsequent helpers (createServerPod, expectConnectivity and other service-ca-specific probes) run only after the operator has settled; apply the same waiting logic to the other probe sections referenced (the similar createServerPod / probe calls around the other ranges) so each probe is gated on service-ca's stable conditions.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 31e8470c-d000-4c86-b63e-eb97eba3a1eb
📒 Files selected for processing (2)
test/e2e/network_policy.go
test/e2e/network_policy_enforcement.go
✅ Files skipped from review due to trivial changes (1)
- test/e2e/network_policy.go
Force-pushed 7f4aed6 to cdcd741 (compare).
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: kaleemsiddiqu. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files. Approvers can indicate their approval by writing /approve in a comment.
♻️ Duplicate comments (2)
test/e2e/network_policy_enforcement.go (2)
Lines 101-124: ⚠️ Potential issue | 🟠 Major — Gate the live policy checks on service-ca readiness.
These specs create pods in openshift-service-ca-operator and immediately assert enforcement, but they never wait for the service-ca ClusterOperator to settle first. During rollout or reconcile, that makes the results timing-dependent. Reuse the same availability/not-progressing/not-degraded gate before each createServerPod path here.
Also applies to: 126-141, 143-201
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/e2e/network_policy_enforcement.go` around lines 101 - 124, The test testServiceCAOperatorNetworkPolicyEnforcement runs network policy assertions immediately after creating test pods; before each createServerPod call (and similarly in the other affected blocks), add a readiness gate that waits for the service-ca ClusterOperator to be Available, not Progressing, and not Degraded (reuse the existing availability/not-progressing/not-degraded check used elsewhere in the suite) so the operator has finished rollout; ensure this readiness check is executed right before calling createServerPod and before any expectConnectivity assertions that depend on service-ca being settled.
Lines 117-119: ⚠️ Potential issue | 🟠 Major — The non-8443 deny cases still only prove "no listener."
At Line 227 the server starts exactly one HTTP listener. The checks at Line 118, Line 160, Line 164, and Line 199 therefore fail even if ingress is fully open, because nothing is serving on those ports. To validate port filtering, expose one additional known-open port from the server and assert that 8443 stays allowed while that extra listener is denied.
Also applies to: 159-167, 198-200, 203-230
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/e2e/network_policy_enforcement.go` around lines 117 - 119, The deny checks are only probing ports with no listener, so add a second known-open listener on the test server and update the connectivity assertions to differentiate allowed 8443 from a denied extra port: modify the server startup (where serverIP is assigned / the server is launched) to bind an additional HTTP listener on a fixed test port (e.g., extraOpenPort), then update expectConnectivity calls (the expectConnectivity function invocations that use serverIP and port numbers) so that assertions verify 8443 is allowed and extraOpenPort is denied for the namespaces under test (replace the current non-8443 ports like 12345 with the new extraOpenPort in the deny checks).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: f4e6bf39-3023-4ad6-9ffa-c8a208375986
📒 Files selected for processing (3)
go.mod
test/e2e/network_policy.go
test/e2e/network_policy_enforcement.go
✅ Files skipped from review due to trivial changes (1)
- test/e2e/network_policy.go
Force-pushed cdcd741 to 6480ea8 (compare).
gomega moved to direct dependency as used in network_policy tests now
moved serviceCAOperatorNamespace to network_policy.go
Signed-off-by: Kaleemullah Siddiqui <ksiddiqu@redhat.com>
Force-pushed 6480ea8 to 3f3ec94 (compare).
Actionable comments posted: 1
♻️ Duplicate comments (2)
test/e2e/network_policy_enforcement.go (2)
Lines 101-110: ⚠️ Potential issue | 🟠 Major — Wait for the service-ca ClusterOperator to settle before these probes.
These four specs target live policy in openshift-service-ca-operator immediately. If the operator is still reconciling, the results are timing-dependent. Add the existing Available=True / Progressing=False / Degraded=False wait before createServerPod, or do it once in shared setup for this block.
Also applies to: 126-131, 147-148, 174-175
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/e2e/network_policy_enforcement.go` around lines 101 - 110, In testServiceCAOperatorNetworkPolicyEnforcement, before calling createServerPod (and similarly before the other probe blocks referenced), wait for the service-ca ClusterOperator to be settled by polling its conditions until Available=True, Progressing=False, Degraded=False; insert a call to the existing ClusterOperator wait utility (or implement a short poll using getKubeClientAndConfig to fetch ClusterOperator status) immediately after retrieving the kube client and namespace and before createServerPod so the operator reconciliation has finished prior to running the network-policy probes.
Lines 203-229: ⚠️ Potential issue | 🟠 Major — The "other port denied" checks can still false-pass.
netexecPod starts exactly one listener from the single port argument, so connections to other ports fail even with fully open ingress. That makes the assertions on Line 119, Line 161, Line 166, and Line 200 validate "nothing is listening" instead of port filtering. Use a second known-open port as the deny control.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/e2e/network_policy_enforcement.go` around lines 203 - 229, netexecPod currently only starts one listener (netexecPod), so tests that assert "other port denied" can false-pass because nothing listens on the control port; change netexecPod to start a second known-open port and expose it in the pod spec: update the function signature to accept a second port (e.g., add altPort int32 or accept ports []int32), add the second container port to Containers[].Ports and pass both listener args to the agnhost netexec command (so it binds both ports), and update all call sites in the tests to use that second port as the deny-control port for assertions that expect "other port is open/closed." Ensure the unique symbol netexecPod, the container Name "netexec" and agnhostImage are updated accordingly so the tests validate actual port filtering rather than "nothing listening."
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/e2e/network_policy_enforcement.go`:
- Around line 155-157: The test uses "openshift-etcd" as the source namespace
for negative checks which is brittle; replace it with a dedicated test source
namespace (e.g., "bad-client" or "deny-client") so the only variable is the
source label/policy under test. Update the two calls to expectConnectivity that
currently pass "openshift-etcd" (the one at g.By(...) / expecting deny on 8443
and the similar block at lines ~182-184) to use the new dedicated namespace
string, and create/ensure that namespace in the test setup/teardown so the
negative control is isolated; refer to the expectConnectivity(...) invocations
and any namespace creation helper used in the test to implement this change.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 2437d87c-7571-473a-8d97-240b3087f0a2
📒 Files selected for processing (4)
go.mod
test/e2e/e2e_test.go
test/e2e/network_policy.go
test/e2e/network_policy_enforcement.go
💤 Files with no reviewable changes (1)
- test/e2e/e2e_test.go
✅ Files skipped from review due to trivial changes (2)
- go.mod
- test/e2e/network_policy.go
```go
g.By("Testing metrics port 8443 from openshift-etcd with custom app label: should be denied")
g.GinkgoWriter.Printf("expecting deny from openshift-etcd with custom label to %s:8443 (custom app labels not allowed in etcd namespace)\n", svcCAOperatorIP)
expectConnectivity(kubeClient, "openshift-etcd", map[string]string{"test": "client"}, svcCAOperatorIP, 8443, false)
```
openshift-etcd is a brittle negative control for 8443.
Here the destination pod and port are the same as the surrounding allow cases; the only variable is the source namespace. A failure can therefore come from unrelated egress/admission controls in openshift-etcd, not from the service-ca-operator ingress policy under test. Use a dedicated source namespace where the destination policy is the only variable.
Also applies to: 182-184
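A minimal sketch of the dedicated negative-control namespace that comment suggests — the name "deny-client" and the label are assumed values taken from the review's example, not anything in the PR:

```yaml
# Hypothetical source namespace for the negative connectivity checks, so the
# destination's ingress policy is the only variable under test.
apiVersion: v1
kind: Namespace
metadata:
  name: deny-client
  labels:
    test: service-ca-np-negative-control
```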
/retest
@kaleemsiddiqu: all tests passed! Full PR test history. Your PR dashboard.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Network policy tests for the added network policy for the service-ca-operator component.