Comprehensive guide for configuring Amazon EKS networking including VPC CNI plugin, load balancers, network policies, and security.
EKS networking involves several key components working together: the VPC CNI plugin assigns VPC IP addresses to pods, the AWS Load Balancer Controller provisions ALBs and NLBs, network policies segment pod-to-pod traffic, and CoreDNS with ExternalDNS handles name resolution.
# Update VPC CNI addon with prefix delegation
aws eks update-addon \
--cluster-name my-cluster \
--addon-name vpc-cni \
--addon-version v1.19.2-eksbuild.1 \
--configuration-values '{
"env": {
"ENABLE_PREFIX_DELEGATION": "true",
"WARM_PREFIX_TARGET": "1"
}
}'
# Verify configuration
kubectl get daemonset -n kube-system aws-node -o yaml | grep ENABLE_PREFIX_DELEGATION
# Create IAM policy
curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.11.0/docs/install/iam_policy.json
aws iam create-policy \
--policy-name AWSLoadBalancerControllerIAMPolicy \
--policy-document file://iam_policy.json
# Create IRSA
eksctl create iamserviceaccount \
--cluster=my-cluster \
--namespace=kube-system \
--name=aws-load-balancer-controller \
--attach-policy-arn=arn:aws:iam::ACCOUNT_ID:policy/AWSLoadBalancerControllerIAMPolicy \
--approve
# Install via Helm
helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
-n kube-system \
--set clusterName=my-cluster \
--set serviceAccount.create=false \
--set serviceAccount.name=aws-load-balancer-controller
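After installation, it is worth confirming the controller deployment is available before creating any Ingress resources:

```shell
# Verify the controller is running (deployment name matches the Helm release above)
kubectl get deployment -n kube-system aws-load-balancer-controller
kubectl rollout status deployment/aws-load-balancer-controller -n kube-system
```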
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/group.name: shared-alb
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:region:account:certificate/xxx
spec:
  ingressClassName: alb
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service
                port:
                  number: 80
# VPC CNI v1.14+ supports network policies natively; enable via the addon configuration
aws eks update-addon \
--cluster-name my-cluster \
--addon-name vpc-cni \
--configuration-values '{"enableNetworkPolicy": "true"}'
# Apply a network policy
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: database
      ports:
        - protocol: TCP
          port: 5432
    - to:
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
EOF
1.1 Calculate IP Requirements
# Formula: IPs per node = (max-pods × 2) + ENIs + buffer
# Example for m5.large:
# - Max pods: 29
# - ENIs: 3
# - IPs needed: (29 × 2) + 3 + 5 = 66 IPs per node
# For a 10-node cluster: 660 IPs minimum
# Recommended subnet: /24 (254 IPs) or /23 (510 IPs)
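The arithmetic above can be scripted when sizing subnets for other instance types (the values below are the m5.large figures from the example; substitute your own max-pods and ENI counts):

```shell
# IPs per node = (max_pods * 2) + ENIs + buffer; the x2 leaves headroom for rolling updates
max_pods=29; enis=3; buffer=5; nodes=10
per_node=$(( max_pods * 2 + enis + buffer ))
cluster_total=$(( per_node * nodes ))
echo "per-node=${per_node} cluster=${cluster_total}"   # per-node=66 cluster=660
```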
1.2 Tag Subnets for EKS
# Public subnets (for internet-facing ALB/NLB)
aws ec2 create-tags \
--resources subnet-xxx \
--tags Key=kubernetes.io/role/elb,Value=1
# Private subnets (for internal ALB/NLB and worker nodes)
aws ec2 create-tags \
--resources subnet-yyy \
--tags Key=kubernetes.io/role/internal-elb,Value=1
# Cluster-specific tags (required)
aws ec2 create-tags \
--resources subnet-xxx subnet-yyy \
--tags Key=kubernetes.io/cluster/my-cluster,Value=shared
2.1 Choose Configuration Mode
# Option 1: Standard Mode (default)
# - One IP per pod
# - Limited by ENI capacity
# Option 2: Prefix Delegation Mode (recommended for high pod density)
kubectl set env daemonset -n kube-system aws-node ENABLE_PREFIX_DELEGATION=true
kubectl set env daemonset -n kube-system aws-node WARM_PREFIX_TARGET=1
# Option 3: IPv6 (recommended for IP exhaustion issues)
# - Virtually unlimited IPs
# - Must create IPv6-enabled cluster
# Option 4: Custom Networking (for secondary CIDR)
kubectl set env daemonset -n kube-system aws-node AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true
kubectl set env daemonset -n kube-system aws-node ENI_CONFIG_LABEL_DEF=topology.kubernetes.io/zone
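Custom networking additionally requires one ENIConfig resource per availability zone, named to match the node's zone label so it is selected automatically. A minimal sketch (the subnet and security group IDs are placeholders):

```shell
kubectl apply -f - <<EOF
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: us-west-2a            # must match the node's topology.kubernetes.io/zone value
spec:
  subnet: subnet-zzz          # placeholder: subnet from the secondary CIDR
  securityGroups:
    - sg-zzz                  # placeholder: security group for pod ENIs
EOF
```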
2.2 Configure IP Management
# Warm pool settings (reduce pod startup time)
kubectl set env daemonset -n kube-system aws-node WARM_IP_TARGET=5
kubectl set env daemonset -n kube-system aws-node MINIMUM_IP_TARGET=10
# Maximum IPs per node
kubectl set env daemonset -n kube-system aws-node MAX_ENI=3
# Monitor IP usage
kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/master/config/master/cni-metrics-helper.yaml
3.1 Configure IngressClass
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: alb
spec:
  controller: ingress.k8s.aws/alb
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
spec:
  controller: k8s.io/ingress-nginx
3.2 Create Shared ALB with IngressGroups
# App 1
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app1-ingress
  annotations:
    alb.ingress.kubernetes.io/group.name: shared-alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/healthcheck-path: /health
spec:
  ingressClassName: alb
  rules:
    - host: app1.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app1-service
                port:
                  number: 80
---
# App 2 (shares same ALB)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app2-ingress
  annotations:
    alb.ingress.kubernetes.io/group.name: shared-alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - host: app2.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app2-service
                port:
                  number: 80
3.3 Create NLB for TCP Services
apiVersion: v1
kind: Service
metadata:
  name: tcp-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: TCP
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
spec:
  type: LoadBalancer
  selector:
    app: tcp-app
  ports:
    - port: 3306
      targetPort: 3306
      protocol: TCP
4.1 Enable Network Policies
# For VPC CNI v1.14+, enable native enforcement via the addon configuration
aws eks update-addon \
--cluster-name my-cluster \
--addon-name vpc-cni \
--configuration-values '{"enableNetworkPolicy": "true"}'
# OR install Calico for enhanced policies
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/calico-policy-only.yaml
4.2 Implement Security Groups for Pods
# Enable SGP in VPC CNI
kubectl set env daemonset -n kube-system aws-node ENABLE_POD_ENI=true
# Create security group
aws ec2 create-security-group \
--group-name pod-security-group \
--description "Security group for pods" \
--vpc-id vpc-xxx
# Create SecurityGroupPolicy
kubectl apply -f - <<EOF
apiVersion: vpcresources.k8s.aws/v1beta1
kind: SecurityGroupPolicy
metadata:
  name: database-pods-sg
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: database
  securityGroups:
    groupIds:
      - sg-xxx
EOF
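The security group created above has no inbound rules yet, so pods attached through the SecurityGroupPolicy will reject traffic until rules are added. A sketch allowing PostgreSQL from an application-tier security group (both IDs are placeholders):

```shell
# Allow inbound PostgreSQL from the application tier's security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-xxx \
  --protocol tcp \
  --port 5432 \
  --source-group sg-app
```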
4.3 Default Deny Network Policy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
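A default-deny policy also blocks DNS, so pods in the namespace can no longer resolve service names. It is usually paired with an explicit egress allowance to kube-dns, along these lines:

```shell
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
EOF
```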
5.1 Configure CoreDNS
# Scale CoreDNS for large clusters
kubectl scale deployment coredns -n kube-system --replicas=3
# Custom DNS forwarding
kubectl edit configmap coredns -n kube-system
# Add custom forwarding rules in Corefile
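As an example, forwarding queries for a private corporate domain to an on-prem resolver adds a server block like the following to the Corefile (the domain and resolver IP are placeholders):

```
corp.internal:53 {
    errors
    cache 30
    forward . 10.0.0.2   # placeholder: on-prem DNS resolver
}
```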
5.2 Install ExternalDNS
# Create IAM policy for Route 53
cat > external-dns-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "route53:ChangeResourceRecordSets"
      ],
      "Resource": "arn:aws:route53:::hostedzone/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:ListHostedZones",
        "route53:ListResourceRecordSets"
      ],
      "Resource": "*"
    }
  ]
}
EOF
# Create IRSA
eksctl create iamserviceaccount \
--cluster=my-cluster \
--namespace=kube-system \
--name=external-dns \
--attach-policy-arn=arn:aws:iam::ACCOUNT_ID:policy/ExternalDNSPolicy \
--approve
# Deploy ExternalDNS
helm repo add external-dns https://kubernetes-sigs.github.io/external-dns/
helm install external-dns external-dns/external-dns \
-n kube-system \
--set serviceAccount.create=false \
--set serviceAccount.name=external-dns \
--set provider=aws \
--set policy=sync \
--set txtOwnerId=my-cluster
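Once running, ExternalDNS creates Route 53 records from Ingress hostnames automatically; for a Service of type LoadBalancer, the desired DNS name is declared with an annotation (the hostname below is a placeholder):

```shell
# Have ExternalDNS publish a Route 53 record for the NLB created by tcp-service
kubectl annotate service tcp-service \
  external-dns.alpha.kubernetes.io/hostname=db.example.com
```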
6.1 Reduce Cross-AZ Traffic
# Use topology-aware routing to keep in-cluster traffic within the same AZ
apiVersion: v1
kind: Service
metadata:
  name: app-service
  annotations:
    service.kubernetes.io/topology-mode: Auto
spec:
  # Note: internalTrafficPolicy: Local restricts traffic to endpoints on the
  # same node and drops it when none exist; prefer topology hints unless
  # node-local delivery is specifically required
  internalTrafficPolicy: Local
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080
6.2 Deploy VPC Endpoints
# Create VPC endpoints for AWS services
aws ec2 create-vpc-endpoint \
--vpc-id vpc-xxx \
--service-name com.amazonaws.us-west-2.ecr.api \
--vpc-endpoint-type Interface \
--subnet-ids subnet-xxx subnet-yyy \
--security-group-ids sg-zzz
# Common endpoints to create:
# - com.amazonaws.region.ecr.api   (Interface)
# - com.amazonaws.region.ecr.dkr   (Interface)
# - com.amazonaws.region.s3        (Gateway)
# - com.amazonaws.region.logs      (Interface)
# - com.amazonaws.region.sts       (Interface)
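S3 uses a Gateway endpoint, which attaches to route tables rather than subnets and security groups; it is needed for pulling ECR image layers privately. A sketch with placeholder IDs:

```shell
# S3 Gateway endpoint -- associates with route tables instead of subnets/SGs
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-xxx \
  --service-name com.amazonaws.us-west-2.s3 \
  --vpc-endpoint-type Gateway \
  --route-table-ids rtb-xxx
```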
6.3 Monitor Network Metrics
# Deploy CNI metrics helper
kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/master/config/master/cni-metrics-helper.yaml
# View metrics
kubectl top nodes
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes
# View CNI metrics
kubectl get daemonset aws-node -n kube-system -o yaml | grep -A 10 WARM
# Check CloudWatch metrics published by the CNI metrics helper
# (published under the "Kubernetes" namespace by default; metric names
# include totalIPAddresses, assignIPAddresses, and maxIPAddresses)
aws cloudwatch get-metric-statistics \
--namespace Kubernetes \
--metric-name totalIPAddresses \
--statistics Average \
--start-time 2025-01-01T00:00:00Z \
--end-time 2025-01-27T00:00:00Z \
--period 3600
# Check pod IP assignment
kubectl get pods -o wide
# Describe pod networking
kubectl describe pod <pod-name> | grep IP
# Check CNI logs
kubectl logs -n kube-system -l k8s-app=aws-node --tail=50
# Verify ENI attachments
aws ec2 describe-network-interfaces \
--filters "Name=attachment.instance-id,Values=i-xxx"
# Check ingress status
kubectl get ingress -A
kubectl describe ingress <ingress-name>
# Check ALB controller logs
kubectl logs -n kube-system deployment/aws-load-balancer-controller
# Verify target groups
aws elbv2 describe-target-groups
aws elbv2 describe-target-health --target-group-arn <arn>
# Create test pod
kubectl run test-pod --image=nicolaka/netshoot -it --rm -- /bin/bash
# Test connectivity
curl http://service-name:port
nc -zv service-name port
# Verify policy applied
kubectl get networkpolicy
kubectl describe networkpolicy <policy-name>
For detailed information, see:
references/vpc-cni.md - CNI plugin configuration, modes, and optimization
references/load-balancers.md - ALB, NLB, and AWS Load Balancer Controller
references/network-policies.md - Network policies, security groups, and segmentation
Symptoms: Pod stuck in ContainerCreating
Check:
# View CNI logs
kubectl logs -n kube-system -l k8s-app=aws-node
# Check allocatable pods per node
kubectl get nodes -o jsonpath='{.items[*].status.allocatable.pods}'
# Verify ENI limits not reached
aws ec2 describe-instances --instance-ids i-xxx
Solutions: enable prefix delegation for higher pod density, add a secondary CIDR with custom networking, or move nodes to larger subnets.
Symptoms: No load balancer provisioned for Ingress
Check:
# Verify controller running
kubectl get pods -n kube-system | grep aws-load-balancer-controller
# Check controller logs
kubectl logs -n kube-system deployment/aws-load-balancer-controller
# Verify subnet tags
aws ec2 describe-subnets --subnet-ids subnet-xxx
Solutions: add the required kubernetes.io/role/elb (or internal-elb) subnet tags, verify the controller's IRSA permissions, and confirm the Ingress sets ingressClassName: alb.
Symptoms: Traffic not blocked as expected
Check:
# Verify network policy enabled
kubectl get daemonset -n kube-system aws-node -o yaml | grep ENABLE_NETWORK_POLICY
# Check policy applied
kubectl get networkpolicy -A
kubectl describe networkpolicy <name>
Solutions: confirm network policy enforcement is enabled in the VPC CNI addon configuration, check that pod labels match the policy's podSelector, and remember that a pod matched by no policy allows all traffic.
Symptoms: Pods can't resolve service names
Check:
# Test DNS from pod
kubectl run -it --rm debug --image=busybox --restart=Never -- nslookup kubernetes.default
# Check CoreDNS status
kubectl get pods -n kube-system -l k8s-app=kube-dns
# View CoreDNS logs
kubectl logs -n kube-system -l k8s-app=kube-dns
Solutions: scale CoreDNS replicas, verify the VPC has DNS support and DNS hostnames enabled, and ensure network policies allow egress to kube-dns on TCP/UDP port 53.