kube-proxy enables networking on Kubernetes nodes, maintaining network rules that allow communication between pods and entities outside the Kubernetes cluster. kube-proxy uses the operating system's packet filtering layer if one is available; otherwise it forwards the traffic itself.
kube-proxy can run in three different modes: iptables, ipvs, and userspace (a deprecated mode that is not recommended for use). iptables, the default mode, is suitable for clusters of moderate size; however, it evaluates network rules sequentially, which can degrade routing performance as the number of Services grows. ipvs can support a large number of Services because it looks up rules in hash tables rather than scanning a sequential chain.
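To see why the lookup strategy matters at scale, here is a toy Python sketch (not kube-proxy code; the Service count and addresses are made up) that contrasts a sequential scan over rules, as iptables does, with a hash-based lookup closer in spirit to the hash tables ipvs relies on:

```python
import time

# Toy model only: a "rule" maps a Service cluster IP to a backend pod name.
NUM_SERVICES = 50_000
rules_list = [(f"10.96.{i // 256}.{i % 256}", f"pod-{i}") for i in range(NUM_SERVICES)]
rules_table = dict(rules_list)

target = rules_list[-1][0]  # worst case for the sequential scan

# Sequential scan: roughly how iptables walks its chains, rule by rule.
start = time.perf_counter()
backend = next(pod for ip, pod in rules_list if ip == target)
print(f"sequential scan -> {backend} in {time.perf_counter() - start:.6f}s")

# Hash lookup: constant-time, regardless of how many Services exist.
start = time.perf_counter()
backend = rules_table[target]
print(f"hash lookup     -> {backend} in {time.perf_counter() - start:.6f}s")
```

The sequential scan slows down linearly as Services are added, while the hash lookup stays effectively constant, which is the intuition behind preferring ipvs for very large clusters.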
kube-proxy runs on each and every node and performs simple TCP/UDP packet forwarding to the backend network Services.
So basically, it is a network proxy that reflects, on each node, the Services as configured in the Kubernetes API.
The Docker-links-compatible environment variables injected into pods expose the cluster IPs and ports that the proxy opens for each Service.
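As a concrete illustration, a process inside a pod can read those variables directly. The minimal sketch below assumes a hypothetical Service named `redis-master`; for such a Service, Kubernetes injects `REDIS_MASTER_SERVICE_HOST` and `REDIS_MASTER_SERVICE_PORT` into pods created after the Service exists.

```python
import os
import socket

# Kubernetes exposes each Service to pods via Docker-links-compatible
# environment variables of the form <SERVICE_NAME>_SERVICE_HOST and
# <SERVICE_NAME>_SERVICE_PORT. "redis-master" is a hypothetical Service.
host = os.environ.get("REDIS_MASTER_SERVICE_HOST")
port = int(os.environ.get("REDIS_MASTER_SERVICE_PORT", "6379"))

if host is None:
    print("Service env vars not set; was the pod created before the Service?")
else:
    # We connect to the Service's cluster IP; kube-proxy's rules on the node
    # redirect this traffic to one of the Service's backend pods.
    with socket.create_connection((host, port), timeout=2) as conn:
        print(f"connected to {host}:{port} from {conn.getsockname()}")
```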
A K8s cluster can have multiple worker nodes, and each node runs multiple pods, so if one has to access a pod, they can do so via kube-proxy.
kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.
In order to access pods via K8s Services, kube-proxy maintains network rules on each node that allow network communication to your Pods from network sessions inside or outside of your cluster.
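To make that Service-to-pod mapping concrete, the sketch below uses the official Python kubernetes client to list the backend endpoints of a hypothetical Service `my-service` in the `default` namespace; these endpoint addresses are the pod IPs that kube-proxy's node-level rules ultimately forward traffic to.

```python
from kubernetes import client, config

# Load credentials from ~/.kube/config (use config.load_incluster_config()
# when running inside a pod instead).
config.load_kube_config()
v1 = client.CoreV1Api()

# Hypothetical Service name and namespace for illustration.
endpoints = v1.read_namespaced_endpoints(name="my-service", namespace="default")

# Each address is a pod backing the Service; traffic sent to the Service's
# cluster IP is redirected to one of these pods by kube-proxy's rules.
for subset in endpoints.subsets or []:
    ports = [p.port for p in (subset.ports or [])]
    for addr in subset.addresses or []:
        pod = addr.target_ref.name if addr.target_ref else "?"
        print(f"backend pod {pod} at {addr.ip}, ports {ports}")
```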
kube-proxy forwards the network traffic required for pod access efficiently: where possible it delegates to the operating system's packet filtering layer instead of proxying every packet itself, which minimizes overhead and makes Service communication more performant.
So far we have seen that these three processes need to be installed and running successfully on your worker nodes in order to manage your containerized applications efficiently, but the bigger questions are:
Who manages these worker nodes to ensure that they are always up and running?
How does the K8s cluster know which pods should be scheduled and which ones should be dropped or restarted?
How does the K8s cluster know the resource requirements of each container app?
Well, the answer lies in the concept of the Master Node. Let’s explore it below.