Kubernetes Network Policies: A Complete Security Guide
Learn how to implement Kubernetes network policies to secure pod-to-pod communication. Includes practical examples, common patterns, and troubleshooting tips.
TL;DR
Network policies in Kubernetes act as firewalls for pods. Default is allow-all. Use labels to select pods and define ingress/egress rules.
Key Takeaways
1. Kubernetes allows all traffic by default - network policies add restrictions
2. Use namespaceSelector and podSelector to target specific workloads
3. Always start with a deny-all policy, then explicitly allow needed traffic
4. Database isolation is critical - only backend pods should reach your database
5. Test policies in staging and use kubectl describe netpol to debug
Network policies are essential for securing Kubernetes clusters in production. In this guide, we’ll explore how to implement effective network policies to control pod-to-pod communication.
Why Network Policies Matter
By default, Kubernetes allows unrestricted communication between all pods in a cluster. While this simplifies development, it creates significant security risks in production environments. Network policies act as firewalls for your pods, controlling which pods can communicate with each other.
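You can observe the default allow-all behavior directly. A quick sketch, assuming a cluster with no policies applied; `my-service.default` is a hypothetical in-cluster service name - substitute one that exists in your cluster:

```shell
# Launch a throwaway pod and probe an arbitrary in-cluster service.
# With no network policies in place, the connection is allowed.
kubectl run probe --rm -it --image=busybox --restart=Never -- \
  wget -qO- --timeout=2 http://my-service.default.svc.cluster.local
```

After you apply a deny-all policy, the same probe should time out instead.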
Prerequisites
Before implementing network policies, ensure you have:
- A Kubernetes cluster with a CNI that supports network policies (Calico, Cilium, Weave Net). For local development, check our MicroK8s setup guide - you can enable Calico with microk8s enable calico.
- kubectl configured to access your cluster
- Basic understanding of Kubernetes pods and namespaces (see our Pods beginner guide if you’re new to Kubernetes)
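A quick heuristic for checking which CNI your cluster runs is to look at the kube-system pods; pod names vary by installation, so treat this as a rough check rather than a guarantee of policy support:

```shell
# Look for a known policy-capable CNI among kube-system pods
kubectl get pods -n kube-system -o name | grep -Ei 'calico|cilium|weave' \
  || echo "No Calico/Cilium/Weave pods found - verify your CNI enforces NetworkPolicy"
```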
Understanding Network Policy Basics
A network policy consists of three main components:
- Pod Selector - Which pods the policy applies to
- Policy Types - Ingress (incoming), Egress (outgoing), or both
- Rules - Allowed sources (ingress) or destinations (egress)
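The three components map directly onto the manifest structure. An annotated skeleton - the label values here are placeholders, not names used elsewhere in this guide:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-policy
spec:
  podSelector:          # 1. which pods this policy applies to
    matchLabels:
      app: example
  policyTypes:          # 2. which directions the policy governs
    - Ingress
    - Egress
  ingress:              # 3. allowed sources for incoming traffic
    - from:
        - podSelector:
            matchLabels:
              app: client
  egress:               # 3. allowed destinations for outgoing traffic
    - to:
        - podSelector:
            matchLabels:
              app: upstream
```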
Creating Your First Network Policy
Let’s start with a simple deny-all policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
This policy selects all pods in the production namespace and blocks all ingress and egress traffic. Network policies are additive: once a pod is selected by any policy, only traffic explicitly allowed by some policy is permitted, so you can layer specific allow rules on top of this baseline.
Allowing Specific Traffic
Here’s how to allow traffic from frontend pods to backend pods:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
Common Patterns
Allow DNS Resolution
Most applications need DNS access. Here’s how to allow it:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP  # DNS falls back to TCP for large responses
          port: 53
Namespace Isolation
Isolate namespaces from each other while allowing internal communication:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}
Database Isolation (Most Common Pattern)
Protecting your database is critical: attackers who compromise one pod typically move laterally through the network to reach sensitive data. This pattern ensures only authorized services can connect to your database.
The scenario: You have a PostgreSQL database that should only be accessible from your backend API pods. Nothing else - not the frontend, not other services, not compromised pods - should reach it.
Step 1: Lock Down the Database
First, deny all traffic to database pods:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: database-deny-all
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: postgresql
      tier: database
  policyTypes:
    - Ingress
    - Egress
Step 2: Allow Backend Access
Now explicitly allow your backend API to connect:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: database-allow-backend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: postgresql
      tier: database
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: backend-api
              tier: backend
      ports:
        - protocol: TCP
          port: 5432
Step 3: Allow Database Egress (If Needed)
If your database needs to reach external services (replication, backups), add specific egress rules:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: database-allow-egress
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: postgresql
      tier: database
  policyTypes:
    - Egress
  egress:
    # Allow DNS resolution
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
    # Allow backup to S3 (HTTPS)
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0  # broad; narrow to your provider's IP ranges if possible
      ports:
        - protocol: TCP
          port: 443
Complete Three-Tier Application Example
Here’s how these policies work together for a typical web application:
┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│  Frontend   │────▶│   Backend   │────▶│  Database   │
│   (nginx)   │     │    (api)    │     │ (postgresql)│
└─────────────┘     └─────────────┘     └─────────────┘
      :80                :8080               :5432
The full policy set:
---
# 1. Default deny for all pods in production
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
---
# 2. Allow DNS for all pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP  # DNS falls back to TCP for large responses
          port: 53
---
# 3. Frontend: allow ingress from internet, egress to backend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      tier: frontend
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from: []  # Allow from anywhere (internet via ingress controller)
      ports:
        - protocol: TCP
          port: 80
  egress:
    - to:
        - podSelector:
            matchLabels:
              tier: backend
      ports:
        - protocol: TCP
          port: 8080
---
# 4. Backend: allow ingress from frontend, egress to database
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      tier: backend
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              tier: frontend
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - podSelector:
            matchLabels:
              tier: database
      ports:
        - protocol: TCP
          port: 5432
---
# 5. Database: only allow ingress from backend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: database-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      tier: database
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              tier: backend
      ports:
        - protocol: TCP
          port: 5432
Testing Your Database Isolation
Verify the policies work correctly:
# This should SUCCEED (backend to database)
kubectl exec -n production deploy/backend-api -- \
  nc -zv postgresql-service 5432

# This should FAIL (frontend to database)
kubectl exec -n production deploy/frontend -- \
  nc -zv postgresql-service 5432

# This should FAIL (random pod to database)
kubectl run test-pod --rm -it --image=busybox -n production -- \
  nc -zv postgresql-service 5432
Debugging Network Policies
When things don’t work as expected:
- Verify CNI support: Not all CNIs support network policies
- Check policy syntax: Use kubectl describe netpol <name>
- Test connectivity: Use kubectl exec to run curl or ping
- Review labels: Ensure pod labels match selectors
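A typical debugging session might look like the following, assuming the production namespace and the labels and resource names used in the examples above:

```shell
# List all policies in the namespace and inspect one in detail
kubectl get networkpolicy -n production
kubectl describe networkpolicy database-policy -n production

# Confirm pod labels actually match the policy's selectors
kubectl get pods -n production --show-labels
kubectl get pods -n production -l tier=backend

# Test connectivity from inside a pod (nc must exist in the image)
kubectl exec -n production deploy/backend-api -- nc -zv postgresql-service 5432
```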
Best Practices
- Start restrictive: Begin with deny-all, then add specific allows
- Document policies: Use annotations to explain each policy’s purpose
- Test in staging: Always validate policies before production
- Monitor traffic: Use tools like Cilium Hubble for visibility
- Version control: Store policies in Git alongside application code
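For the "document policies" practice, annotations on the policy object are a natural place for that context. A sketch - the annotation keys and the SEC-1234 ticket reference are hypothetical conventions, not Kubernetes requirements:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: database-policy
  namespace: production
  annotations:
    # Free-form keys - pick a convention that fits your organization
    description: "Only backend API pods may reach PostgreSQL on 5432"
    owner: "platform-team"
    ticket: "SEC-1234"
spec:
  podSelector:
    matchLabels:
      tier: database
  policyTypes:
    - Ingress
```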
Conclusion
Network policies are a fundamental security control for Kubernetes clusters. By implementing a deny-by-default approach and carefully allowing necessary traffic, you can significantly reduce your attack surface.
Want to master Kubernetes security? Our CKS (Certified Kubernetes Security Specialist) course covers network policies in depth, along with pod security, supply chain security, and runtime threat detection. For teams just getting started with Kubernetes, check out our Kubernetes Basics course.
At Fraway, we help organizations implement secure Kubernetes infrastructure. Our certified engineers can assist with network policy design, implementation, and troubleshooting. Contact us to learn more about our Kubernetes consulting services.
Written by
Francesco Donzello
CKA Certified Engineer