Container networking is hard?

Published On: September 16, 2016

I felt like I heard two completely different pitches on container networking from the two companies best positioned to discuss the topic. Both Google and Docker presented on container networking at the VMware-sponsored FutureNET event, a summit on the future of networking. It was nearly impossible to tell which company sponsored the event: we heard a ton about OpenStack and SDN with little reference to either vSphere or NSX. Let me share the highlights of the Google talk and then discuss the takeaways from the Docker discussion.

Each talk was only 20 minutes, so it was very much like drinking from the firehose without any time to brace yourself for the impact. Google mentioned Kubernetes only in passing; the focus of the talk was the company's pain points, and Google is actively looking for smart startups to solve these problems.

Everything breaks at scale

To summarize the issues: it's all about scale. We've heard the value proposition of data center orchestration solutions such as Kubernetes and Docker Swarm. Operating containers at scale in production requires an ecosystem, and much of that ecosystem has focused on application development and workload orchestration. Networking, however, has received far less attention. VMware has spoken about how it has integrated NSX into container networking to bring traditional network tools to the container world. After hearing Google's talk, it's clear it isn't as simple as taking the old paradigm and applying it to containers.

Network provisioning, operations, and de-provisioning each present unique challenges to the network infrastructure. The speed and lifespan of containers are the issue. Say you wanted to take a VMware-like approach to containers: each container receives an IP address and some namespace entry. The namespace entry could be a DNS record (unlikely) or a registration in a services database in Kubernetes or Docker Swarm. Containers spin up in microseconds, as opposed to the minutes or seconds it takes VMs or bare metal, perform their intended operation, and then spin down. Take Lambda functions as an example: a Lambda function lasts no longer than 5 seconds, and if the work can't be performed in 5 seconds, the container dies.
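To get a feel for the churn that implies, here is some back-of-the-envelope arithmetic of my own; the figures are illustrative, not numbers from the talk (the per-rack count echoes the density Google mentions below).

```python
# Illustrative figures only -- my own assumptions, not numbers from the talk.
containers_per_rack = 8_000       # e.g. 200 per host x 40 hosts (see below)
avg_lifespan_seconds = 5          # a short-lived, Lambda-style task

# Every container slot turns over once per lifespan, and each turnover is
# an allocate-plus-release pair the network control plane has to absorb.
starts_per_second = containers_per_rack / avg_lifespan_seconds
provisioning_events_per_second = starts_per_second * 2

print(starts_per_second)                # 1600.0 container starts per second, per rack
print(provisioning_events_per_second)   # 3200.0 IP/namespace operations per second, per rack
```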

Just assigning IP addresses becomes a challenge: when allocation requests arrive simultaneously, IP collisions can occur. The same problem applies to namespace provisioning. Remember, too, that all of this activity needs to be logged as a troubleshooting tool, and the sheer amount of telemetry generated by provisioning and de-provisioning is daunting.
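On the IP-collision point specifically, here is a deliberately simplified sketch of my own (not Google's design, and not any real allocator) showing why "find a free address and mark it used" has to be one atomic step when requests arrive simultaneously:

```python
import ipaddress
import threading

class Ipam:
    """Toy IP allocator: correctness hinges on the lock around allocate()."""

    def __init__(self, cidr):
        self._free = [str(ip) for ip in ipaddress.ip_network(cidr).hosts()]
        self._lock = threading.Lock()

    def allocate(self):
        # Without the lock, two simultaneous callers could both see the same
        # "free" address and both hand it out -- the collision described above.
        # Real systems push this atomicity into a shared store (compare-and-swap
        # or a transaction) rather than an in-process lock.
        with self._lock:
            return self._free.pop(0) if self._free else None

    def release(self, ip):
        with self._lock:
            self._free.append(ip)

pool = Ipam("10.0.0.0/29")
print(pool.allocate())  # 10.0.0.1
print(pool.allocate())  # 10.0.0.2
```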

Google has found practical challenges in managing containers at densities of thousands per rack. A concrete example is the network forwarding table, or CAM. Two hundred or more concurrent containers may exist on a single host in a rack; multiply that by 40 nodes and you have a severe forwarding-table challenge if the requirement is that any container can talk to any container.
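The arithmetic behind that concern is simple enough (my extrapolation from the per-host and per-rack counts in the talk):

```python
# Only the per-host and per-rack counts come from the talk; the rest is my arithmetic.
containers_per_host = 200
hosts_per_rack = 40
endpoints_per_rack = containers_per_host * hosts_per_rack

# If every container is individually reachable and any container may talk to
# any container, the top-of-rack switch needs a forwarding entry per endpoint:
# 8,000 for this rack alone, before learning a single address from other racks.
print(endpoints_per_rack)  # 8000
```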

Google wasn't offering solutions to these difficulties. First, they only had 20 minutes to deliver the presentation. Second, they were appealing to the audience to create a startup that solves these problems so that Google could become a customer.

Docker’s Take

Interestingly enough, Docker followed with its presentation. Docker's Madhu Venugopal gave the Docker take on container networking. Madhu is a co-founder of Socketplane, which Docker acquired shortly after Madhu appeared on The CloudCast. Socketplane's approach was very much like VMware's approach to container networking: it used the built-in Linux virtual switch (Open vSwitch) to map container traffic onto the physical network. Docker has since modified the architecture to fit its plugin approach.

Docker didn't answer any of Google's questions. It's obvious that Docker is a company focused on developers as opposed to infrastructure. Madhu talked about the abstraction problem versus the actual data-plane issues associated with container networking, and pointed to the simplicity of mapping container networking concepts such as load balancing and namespaces onto the traditional network. I walked away feeling I had been given a hand wave and told not to pay any attention to the man behind the curtain; those problems were for the Ciscos and VMwares of the world. In Madhu's words, networking people make container networking hard.

Madhu may very well be right; I'm simply not deep enough in networking and containers to dispute his assertion. I do know that Google thinks it's a hard problem. Then again, every problem is hard for Google when you're talking webscale proportions.

As a bonus: back last October I recorded a podcast with now-Google engineer Kelsey Hightower and Greg Ferro discussing container networking.

 
