Which networking should we use? We can choose among the following networking options:
kubenet
CNI
classic
external
The classic Kubernetes native networking is deprecated in favor of kubenet, so we can discard it right away.
The external networking is used in some custom implementations and for particular use cases, so we’ll discard that one as well.
That leaves us with kubenet and CNI.
The Container Network Interface (CNI) allows us to plug in a third-party networking driver. Kops supports Calico, Flannel, Canal (Flannel + Calico), kopeio-vxlan, kube-router, Romana, Weave, and amazon-vpc-routed-eni networks. Each of those networks comes with its own pros and cons, and they differ in their implementations and primary objectives. Choosing among them would require a detailed analysis of each. We’ll leave a comparison of all those for some other time and place. Instead, we’ll focus on kubenet.
Kubenet is kops’ default networking solution. It is Kubernetes native networking, and it is considered battle-tested and very reliable. However, it comes with a limitation. On AWS, routes for each node are configured in AWS VPC routing tables. Since those tables cannot have more than fifty entries, kubenet can be used only in clusters with up to fifty nodes. If you’re planning to have a cluster bigger than that, you’ll have to switch to one of the previously mentioned CNIs.
Use kubenet networking if your cluster is smaller than fifty nodes.
The good news is that using any of the networking solutions is easy. All we have to do is specify the --networking argument followed by the name of the network.
Given that we won’t have the time and space to evaluate all the CNIs, we’ll use kubenet as the networking solution for the cluster we’re about to create.
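As a sketch, a cluster-creation command using kubenet might look like the following. The cluster name and availability zones are placeholders, not values from this text; adjust them to match your environment.

```shell
# Create a cluster that uses kubenet as its networking solution.
# The name and zones below are illustrative placeholders.
kops create cluster \
    --name my-cluster.k8s.local \
    --zones us-east-2a,us-east-2b,us-east-2c \
    --networking kubenet \
    --yes
```

Switching to a CNI later would be a matter of replacing kubenet with, for example, calico in the --networking argument, though migrating the networking of a running cluster is not a trivial operation.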