title: "IPAM for Kubernetes and Container Orchestration: Managing Pod IP Addresses at Scale"
slug: "ipam-for-kubernetes-container-orchestration-pod-ip-management"
url: "/ipam-for-kubernetes-container-orchestration-pod-ip-management"
date: "2026-04-28"
author: "Mike Walton"
keywords:
  - "kubernetes IPAM"
  - "pod IP address management"
  - "container orchestration networking"
  - "CNI IPAM plugins"
  - "kubernetes IP exhaustion"
tags:
  - "IPAM"
  - "Kubernetes"
  - "Containers"
  - "DevOps"
  - "Network Management"
status: "draft"
IPAM for Kubernetes and Container Orchestration: Managing Pod IP Addresses at Scale
By Mike Walton, Founder of CertMS
*With 20+ years managing enterprise IT infrastructure, I’ve watched containers go from experimental curiosity to production backbone. The networking challenges that come with them? Those caught a lot of organizations off guard. Here’s what I’ve learned about keeping your IP addresses under control when pods spin up faster than anyone can track.*
Your deployment pipeline created 47 new pods overnight. Each one grabbed an IP address from your pod CIDR. By morning, your monitoring dashboard shows subnet utilization at 94%. Someone tries to scale up a critical service and gets a cryptic error: “failed to allocate for range 0: no IP addresses available.”
This isn’t a hypothetical nightmare. It’s what happens when traditional IPAM thinking meets container reality.
According to Datadog’s container report, nearly half of organizations running containers now use Kubernetes for orchestration. Serverless container adoption jumped to 46% from 31% just two years earlier. That growth means more pods, more IP addresses consumed, and more organizations discovering their network planning didn’t account for this level of dynamism.
Why Kubernetes Changes Everything About IP Management
Traditional servers stick around. You assign an IP, document it, and that assignment might stay valid for years. Kubernetes plays by completely different rules.
A single node in a production cluster might run 110 pods by default—each with its own IP address. Scale that across 50 nodes and you’re looking at 5,500 IP addresses just for pods. Add services, load balancers, and ingress resources, and the numbers climb further.
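That arithmetic is worth doing before you pick a pod CIDR. Here's a minimal Python sketch using the example figures above; the numbers are illustrative, not a sizing recommendation:

```python
import math

# Pod IP demand for the example above: 110 pods per node (the common
# kubelet default) across 50 nodes.
max_pods_per_node = 110
node_count = 50

pod_ips_needed = max_pods_per_node * node_count  # 5,500 addresses
# Round up to the next power of two to get the minimum prefix length.
prefix_len = 32 - math.ceil(math.log2(pod_ips_needed))
print(f"{pod_ips_needed} pod IPs need at least a /{prefix_len} pod CIDR")
# -> 5500 pod IPs need at least a /19 pod CIDR (before growth headroom)
```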
Cilium’s documentation notes that “various IPAM modes are supported to meet the needs of different users.” Calico, Flannel, Weave, and AWS VPC CNI all bring their own approaches to allocating addresses.
This creates a fragmentation problem. Your VMware environment has its IPAM. Your physical network has its IPAM. And now Kubernetes has yet another system assigning addresses with its own rules, pools, and constraints. Without coordination, these systems will eventually conflict.
Tigera’s Calico documentation explains that Calico’s IPAM “provides additional IP allocation efficiency and flexibility compared to other address management approaches.” Features like per-namespace IP pools let you assign separate address ranges to different teams or applications. That’s powerful—but only if someone documents which pools exist and tracks their utilization.
The Stranded IP Problem
Here’s a subtle issue that compounds over time: stranded IP allocations.
The Whereabouts IPAM project on GitHub acknowledges that “a hard system crash on a node might leave behind stranded IP allocations, so if you have a thrashing system, this might exhaust IPs.” When a node dies unexpectedly, the IP addresses its pods held don’t always get released cleanly. Over weeks and months, these orphaned allocations eat into your available pool.
Traditional IPAM tools won’t see this happening. The CNI plugin thinks those addresses are allocated. Your external IPAM thinks those addresses belong to Kubernetes. Nobody reconciles the difference until someone runs out of space.
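There's no universal command for surfacing these, but the comparison itself is simple. Here's a minimal reconciliation sketch; it assumes you've already exported the CNI plugin's allocation list (plugin-specific; Whereabouts, for example, keeps its allocations in Kubernetes custom resources) and the pod IPs the cluster currently reports (for instance, from `kubectl get pods -A -o wide`). Neither export is shown here:

```python
# Placeholder sets standing in for the two exports described above.
cni_allocated = {"10.244.1.12", "10.244.1.13", "10.244.2.7"}
live_pod_ips = {"10.244.1.12", "10.244.2.7"}

# Anything the CNI thinks is allocated but no running pod holds
# is a candidate stranded allocation worth investigating.
stranded = cni_allocated - live_pod_ips
for ip in sorted(stranded):
    print(f"possible stranded allocation: {ip}")
# -> possible stranded allocation: 10.244.1.13
```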
IP Exhaustion: The Silent Cluster Killer
IP exhaustion doesn’t announce itself dramatically. It creeps up through failed pod schedules, stuck deployments, and mysterious “ContainerCreating” states.
AWS’s container blog describes a common scenario: “While [VPC-native pod networking] makes Pods first-class citizens within the VPC network, it often leads to exhaustion of the limited number of IPv4 addresses available in the VPCs.”
Organizations using RFC 1918 private ranges often face this earlier than expected. A /16 network sounds huge until you realize Kubernetes clusters consume IP space aggressively. The Isovalent blog on Cilium puts it plainly: exhaustion happens when “all IPv4 addresses in the node’s CIDR block (or the cluster’s IP pool) have been allocated.”
The solutions aren’t always straightforward:
Prefix delegation allocates blocks of addresses (/28 prefixes) instead of individual IPs, improving efficiency but requiring CNI support.
Custom networking assigns pod IPs from secondary CIDR ranges, preserving your primary VPC space for other uses.
IPv6 adoption theoretically solves the problem entirely, but as AWS notes, “many customers aren’t ready to make these types of decisions at the organization level.”
Each solution requires planning, testing, and documentation that extends beyond Kubernetes itself.
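The prefix delegation math is easy to sanity-check with Python's standard ipaddress module. This sketch uses an arbitrary example range, not anything your CNI would choose for you:

```python
import ipaddress

# How many /28 blocks (16 addresses each) fit in one example /24?
secondary_range = ipaddress.ip_network("100.64.0.0/24")
blocks = list(secondary_range.subnets(new_prefix=28))
print(f"{secondary_range} yields {len(blocks)} /28 prefixes of "
      f"{blocks[0].num_addresses} addresses each")
# -> 100.64.0.0/24 yields 16 /28 prefixes of 16 addresses each
```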
Bridging the Gap: Kubernetes and Traditional IPAM
The real challenge isn’t managing IPs within Kubernetes—the CNI plugins handle that. The challenge is maintaining visibility across your entire infrastructure.
Your network team needs to know which address ranges Kubernetes claims. Your capacity planning needs to account for cluster growth. Your security team needs to understand which IPs belong to which workloads for firewall rules and compliance.
Enterprise networking research from AWS identifies this as a primary pain point: “Platform teams must now manage IPAM and pod networking, routing (VXLAN, IPIP overlays etc.), service IP allocation and advertisements, and integrations with existing network security controls.”
This is where traditional IPAM tools still matter—even in container-native environments.
Documenting Your Kubernetes Network Space
At minimum, your IPAM should track:
- Pod CIDRs for each cluster, clearly marked as Kubernetes-managed
- Service CIDRs (the ClusterIP range) reserved for Kubernetes internal use
- Node IP ranges where your Kubernetes worker nodes live
- Load balancer pools if you’re using MetalLB or similar
- Reserved space for cluster expansion
This doesn’t require deep integration with Kubernetes APIs. It requires deliberate documentation and reserved address blocks.
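Even a lightweight script beats a stale spreadsheet. The sketch below shows one way to keep that documentation testable; all the names and ranges are invented for illustration:

```python
import ipaddress
from itertools import combinations

# Every range claimed so far, Kubernetes-managed or not.
allocations = {
    "prod-cluster pod CIDR": "10.200.0.0/16",
    "prod-cluster service CIDR": "10.96.0.0/12",
    "corp VM infrastructure": "10.10.0.0/16",
    "proposed new-cluster pod CIDR": "10.10.128.0/17",
}
networks = {name: ipaddress.ip_network(cidr)
            for name, cidr in allocations.items()}

# Flag any pair of documented ranges that overlap.
for (name_a, net_a), (name_b, net_b) in combinations(networks.items(), 2):
    if net_a.overlaps(net_b):
        print(f"CONFLICT: {name_a} ({net_a}) overlaps {name_b} ({net_b})")
# -> CONFLICT: corp VM infrastructure (10.10.0.0/16) overlaps
#    proposed new-cluster pod CIDR (10.10.128.0/17)
```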
When you’re planning subnet structures across environments, Kubernetes clusters need their own allocations that won’t conflict with traditional infrastructure in dev, staging, or production.
The Hybrid Reality
Most organizations don’t run pure Kubernetes environments. They have:
- Physical servers for specific workloads
- VMware or Hyper-V for virtual machines
- One or more Kubernetes clusters for containerized applications
- Cloud instances that might or might not be Kubernetes-managed
Each layer needs IP addresses. Each layer has its own management tools. Without a unified view, conflicts are inevitable.
IPAM for virtualized environments already taught us this lesson: you can’t manage dynamic infrastructure with static documentation. Kubernetes amplifies the problem by an order of magnitude.
Best Practices for Kubernetes IP Planning
Here’s what actually works based on hard-won experience.
Size Your CIDRs Generously
The instinct to conserve IP space leads to under-provisioned clusters. A /20 pod CIDR feels generous until your cluster grows.
Google Cloud recommends planning for growth: “Proper IP address planning is crucial for the successful deployment and operation of your… cluster. Without careful planning, you may encounter issues such as IP exhaustion, routing conflicts, and difficulties in scaling.”
Build in headroom. If you expect 100 nodes eventually, plan for 200. The cost of unused IP space is far lower than the cost of cluster migration.
Segregate Kubernetes Traffic
Don’t let Kubernetes grab addresses from your general infrastructure pool. Dedicate CIDRs for:
- Pod networks (often 10.x.x.x ranges)
- Service networks (smaller, internal-only)
- Node networks (may share with general infrastructure or have dedicated space)
This segregation makes conflicts obvious and capacity planning cleaner. When detecting rogue devices, you’ll know immediately if something shows up in a range that should only contain Kubernetes pods.
Monitor Utilization Proactively
CNI plugins typically expose metrics about IP pool utilization. Export these to your monitoring system and set alerts well before exhaustion:
- Warning at 70% utilization gives you time to plan
- Critical at 85% demands immediate attention
- Emergency at 95% means failed deployments are imminent
The IPAM metrics that matter for traditional networks apply here too—just at container scale.
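What this looks like in practice depends on your monitoring stack, but the threshold logic itself is trivial. A minimal sketch, assuming you already scrape allocated and total address counts from your CNI's metrics:

```python
def alert_level(allocated: int, capacity: int) -> str:
    """Map IP pool utilization onto the thresholds above."""
    pct = allocated / capacity * 100
    if pct >= 95:
        return f"EMERGENCY ({pct:.0f}%): failed deployments imminent"
    if pct >= 85:
        return f"CRITICAL ({pct:.0f}%): act immediately"
    if pct >= 70:
        return f"WARNING ({pct:.0f}%): start planning"
    return f"OK ({pct:.0f}%)"

# e.g. 6,200 pods drawing from a /19 pod CIDR (8,192 addresses)
print(alert_level(6200, 8192))  # -> WARNING (76%): start planning
```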
Plan for Multi-Cluster Scenarios
Production environments increasingly run multiple Kubernetes clusters:
- Separate clusters for different teams or applications
- Regional clusters for global deployments
- Development and staging clusters alongside production
Each cluster needs non-overlapping address space. Document which CIDRs belong to which cluster, and reserve expansion room for each.
Multi-site IP management principles apply: consistent addressing schemes, clear documentation, and centralized visibility.
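One way to guarantee non-overlapping space is to carve every cluster's pod CIDR from a single reserved supernet. Here's a sketch, with an invented supernet and cluster names:

```python
import ipaddress

# Reserve one block for all Kubernetes pod networks, then hand each
# cluster its own /16 from it. 10.128.0.0/12 is just an example.
reserved = ipaddress.ip_network("10.128.0.0/12")
clusters = ["prod-us-east", "prod-eu-west", "staging", "dev"]

blocks = reserved.subnets(new_prefix=16)  # sixteen /16s in a /12
cluster_cidrs = {name: next(blocks) for name in clusters}
for name, cidr in cluster_cidrs.items():
    print(f"{name}: {cidr}")
print(f"{sum(1 for _ in blocks)} /16 blocks held back for expansion")
# -> prod-us-east: 10.128.0.0/16 ... plus 12 blocks held in reserve
```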
Integrate with Change Management
When someone creates a new cluster, that’s a significant IP allocation event. It should go through whatever change process governs your network:
- Document the address ranges being claimed
- Update your central IPAM system
- Verify no conflicts exist with existing allocations
- Plan for expansion needs
IPAM change tracking matters just as much for cluster provisioning as for individual host assignments.
When Traditional IPAM Meets Kubernetes
Your existing IPAM tool probably won’t manage pod IPs directly. The CNI plugin owns that. But your IPAM absolutely should:
Reserve and document Kubernetes ranges. Mark those CIDRs as allocated to Kubernetes with notes about which cluster uses them.
Track node IP assignments. Kubernetes worker nodes are just servers from a network perspective. Their IPs belong in your standard inventory.
Monitor border addresses. Load balancer IPs, ingress addresses, and external service endpoints bridge Kubernetes and your broader network.
Plan capacity holistically. Kubernetes growth consumes IP space that might otherwise go to VM expansion or new physical infrastructure.
Subnet24’s hierarchical group structure handles this well. Create a group for Kubernetes infrastructure, nest clusters underneath, and document the CIDR allocations for each. When you need to know what address space Kubernetes claims, everything’s organized in one view.
The real-time collaboration features matter here too. When your platform team provisions a new cluster, everyone sees the updated IP allocations immediately—no waiting for spreadsheet updates or documentation lag.
The Path Forward: Containers and Traditional Infrastructure Together
Kubernetes isn’t replacing your entire network. It’s adding a layer that moves faster than anything before it.
Success means:
Acknowledging dual systems. CNI plugins manage container IPs. Traditional IPAM manages everything else and tracks the boundaries.
Planning for scale. Container adoption accelerates. The clusters you deploy this year will be larger next year.
Maintaining visibility. You can’t secure, troubleshoot, or plan what you can’t see. Central documentation of all IP allocations—Kubernetes and otherwise—remains essential.
Automating where possible. Manual tracking can’t keep pace with container velocity. Use IPAM tools that update in real time and integrate with your provisioning workflows.
The organizations that struggle are the ones treating Kubernetes networking as separate from their broader IP strategy. The ones that succeed treat it as another workload type needing address space—just one that moves much faster.
Don’t Let Your Clusters Outrun Your Documentation
Kubernetes will keep creating pods. Your CI/CD pipeline will keep deploying. The question is whether your IP address management keeps pace or falls behind.
Subnet24 won’t manage your pod IPs—that’s what CNI plugins do. But it will help you document which address ranges Kubernetes claims, track node assignments alongside your traditional infrastructure, and maintain the visibility you need for capacity planning and troubleshooting.
Real-time updates mean your team sees allocation changes instantly. Unlimited nested groups let you organize clusters, environments, and infrastructure types however makes sense. Cloud access means checking IP status whether you’re in the data center or responding to a 2 AM page from home.
Start your free trial—up to 4 /24 subnets with no credit card required—and bring your Kubernetes IP planning into the same system as the rest of your network.
Mike Walton is the founder of CertMS, a certificate management platform. He has 20+ years of experience in IT infrastructure and PKI management.
Sources:
- Datadog: 10 Insights on Real-World Container Use
- Cilium Documentation: IP Address Management (IPAM)
- Tigera: Calico IPAM Explained and Enhanced
- AWS: Addressing IPv4 Address Exhaustion in Amazon EKS Clusters
- Isovalent: Overcoming Kubernetes IP Address Exhaustion with Cilium
- AWS: Navigating Enterprise Networking Challenges with Amazon EKS Auto Mode
- GitHub: Whereabouts CNI IPAM Plugin