Description
A Kubernetes egress network policy should allow a pod to reach the local network, so that the pod can connect to a server outside the k8s cluster. On one of our clusters this policy is not effective: the traffic it allows is blocked. One such server is in fact a load balancer managed by the (Hcloud) Cloud Controller Manager, and the network policy for the local network does not apply to it.
Expected Behavior
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test
  namespace: netpoltest
spec:
  podSelector:
    matchLabels:
      netpol: test
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.40.0.0/16
      ports:
        - protocol: TCP
          port: 443
A pod with the label netpol: test should be allowed to connect to https://10.40.0.13:443 (the load balancer).
Current Behavior
With no network policies in place (everything allowed), the connection works as expected. However, when the 10.40.0.0/16 subnet is selectively allowed with the policy above, the connection times out.
Solution
The load balancer in question belongs to a Service on the same cluster. As a workaround, allow egress to the backing pods instead of the load balancer IP; in our case:
egress:
  - to:
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: ingress-nginx-internal
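For completeness, here is a minimal sketch of the full policy with this workaround applied. The name, namespace, and pod selector are taken from the policy above; the port is assumed to be the same 443 as in the original rule and may need to be adjusted to the backends' actual target port.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test
  namespace: netpoltest
spec:
  podSelector:
    matchLabels:
      netpol: test
  policyTypes:
    - Egress
  egress:
    # Allow egress to the pods behind the load balancer rather than to the LB IP.
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx-internal
      ports:
        - protocol: TCP
          port: 443   # assumption: backends also listen on 443; adjust to the actual target port

This likely works because, for in-cluster clients, traffic to the load balancer IP is DNAT'ed to the pod endpoints before egress policy is evaluated, so the ipBlock rule never sees the 10.40.0.0/16 destination.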
Steps to Reproduce (for bugs)
- Create a namespace
- In the namespace, create the policy quoted above
- Create a pod with a matching label and get an interactive shell
- Test with curl, using the IP address to avoid DNS issues (see the sketch below)
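A minimal sketch of a test pod for the last two steps; the pod name and the curlimages/curl image are arbitrary choices for illustration, not part of the original report:

apiVersion: v1
kind: Pod
metadata:
  name: netpol-test
  namespace: netpoltest
  labels:
    netpol: test               # matches the podSelector of the policy above
spec:
  containers:
    - name: curl
      image: curlimages/curl   # assumption: any image that ships curl works
      command: ["sleep", "infinity"]
# Then exec into the pod and test the connection, e.g.:
#   kubectl -n netpoltest exec -it netpol-test -- curl -kv https://10.40.0.13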
Context
The Kubernetes cluster was created with KubeOne 1.10.1 in the Hetzner cloud: https://docs.kubermatic.com/kubeone/v1.10/tutorials/creating-clusters/#default-configuration
Your Environment
- Calico version: v3.29.2 with flannel:v0.24.4 (actually canal)
- Calico dataplane (iptables, windows etc.)
- Orchestrator version: Kubernetes 1.32.6
- Operating System and version: Ubuntu 24.04 LTS