Applying k8s network policies in Amazon EKS
I'm learning about Kubernetes network policies. I'm trying to set up a situation where two pods in the same namespace have different network policies applied to them:
- pod A has ingress from anywhere
- pod B has ingress from nowhere (but eventually, only pod A)
I'm finding that Kubernetes appears to accept the network policies but not enforce them. The deployed pods use the ealen/echo-server:latest image to echo back information about the environment it's running in, and to test the policies I make an HTTP request from one pod to the other:
kubectl exec \
  -n private-networking \
  POD_A_NAME \
  -- wget -O - service-b.private-networking
If the policies are working, I expect a call from A to B to fail with a timeout, and a call from B to A to succeed. Currently, they succeed both ways.
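For completeness, the check in the other direction (from B to A, which should still succeed) is the same command with the pod and Service swapped; POD_B_NAME and service-a are placeholders here, assuming A's Service follows the same naming pattern:
kubectl exec \
  -n private-networking \
  POD_B_NAME \
  -- wget -O - service-a.private-networking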
The cluster is deployed with Amazon EKS, and I'm not using Calico or anything else on top (though you'll see in the GitHub repo that I tried it).
The pods are deployed via Deployment objects that differ only in name. (N.b. the pods are not running on Fargate.)
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-a
  namespace: private-networking
spec:
  selector:
    matchLabels:
      service: service-a
  template:
    metadata:
      labels:
        service: service-a
    spec:
      containers:
        - name: echo-a
          image: ealen/echo-server:latest
          resources:
            limits:
              memory: "128Mi"
              cpu: "100m"
          ports:
            - containerPort: 8080
          env:
            - name: PORT
              value: "8080"
The applied network policies are below, and accessible on GitHub too.
What am I missing?
---
# Deny all ingress and egress traffic across the board
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: private-networking
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
---
# Allow all pods in the namespace to egress traffic to kube-dns
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: private-networking
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - port: 53
          protocol: UDP
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: service-a-ingress-from-anywhere
  namespace: private-networking
spec:
  podSelector:
    matchLabels:
      service: service-a
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 0.0.0.0/0
      ports:
        - port: 8080
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: service-a-egress-to-anywhere
  namespace: private-networking
spec:
  podSelector:
    matchLabels:
      service: service-a
  egress:
    - {}
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: service-b-ingress-from-nowhere
  namespace: private-networking
spec:
  podSelector:
    matchLabels:
      service: service-b
  policyTypes:
    - Ingress
  ingress: []
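For reference, the eventual goal for B (ingress only from pod A) would presumably look something like the policy below; I haven't applied this yet, and the name is just a placeholder:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: service-b-ingress-from-service-a   # placeholder name
  namespace: private-networking
spec:
  podSelector:
    matchLabels:
      service: service-b
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              service: service-a
      ports:
        - port: 8080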
The answer to this question turned out to be installing Calico on the Amazon EKS cluster. I had misunderstood the documentation, believing Calico was an optional extra and that the CNI plugin Amazon EKS installs by default (the Amazon VPC CNI) would enforce NetworkPolicy objects.
It turns out it doesn't: the API server accepts the policies, but nothing enforces them until a policy engine such as Calico is installed.
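With Calico in place, re-running the original test is the quickest way to confirm enforcement; adding a short timeout to wget makes the expected failure explicit (POD_A_NAME is a placeholder, as before):
# This call should now time out, since service-b allows ingress from nowhere
kubectl exec \
  -n private-networking \
  POD_A_NAME \
  -- wget -T 5 -O - service-b.private-networking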