Kubernetes 101: Component Architecture
Throughout this series, Sr. Cloud Systems Developer Craig Bowers will demystify Kubernetes, commonly referred to as k8s. In this blog, Craig breaks down the k8s components.
Let’s start with the basics: What is Kubernetes?
Well, according to the kubernetes.io site, “Kubernetes is a portable, extensible open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation.” (https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/).
Kubernetes is an excellent platform for container management. It offers many features beyond simply running your containerized application, a big one being security. It can run anywhere, whether your infrastructure is in the cloud or on-prem.
Kubernetes component breakdown
Now that you know what Kubernetes is, let's take a closer look at the components involved. In this first blog, I will break down the k8s components. In the following posts, I will build on this foundation and take you on a deeper dive into the k8s ecosystem.
At the highest level, there are the control plane (AKA the master) and the worker nodes. The control plane is the brains of the k8s ecosystem and consists of the API server, the scheduler, the controller manager, and, most importantly, ETCD. These processes, working together, manage the cluster state as a whole. The worker nodes, as you've probably already guessed, are where your containerized workloads run. Each worker node consists of the kube-proxy, the kubelet, and a container runtime. Let's take a closer look at the control plane and worker nodes.
Every interaction in k8s is a RESTful API call, and therefore the API server is the front door to the k8s environment. The API server is responsible for persisting API objects, and this is where ETCD comes into the picture. ETCD is a reliable key-value store, typically deployed in a distributed architecture. So, basically, it's the database backend that stores everything going on within the k8s cluster. The API server is the only component within the k8s ecosystem that interacts with ETCD. Well, technically there are other circumstances, but to keep things simple, we'll stick with this for now. The API itself is broken up into groups, and every k8s resource you interact with falls into one of these groups. I will show you this in action in an upcoming post.
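To make the key-value idea concrete, here is a toy sketch in Python (not actual Kubernetes code). The API server persists each object in ETCD under a hierarchical key, roughly of the form /registry/&lt;resource&gt;/&lt;namespace&gt;/&lt;name&gt;; a plain dict stands in for the distributed store:

```python
def registry_key(resource, namespace, name):
    """Build an ETCD-style key for an API object."""
    return f"/registry/{resource}/{namespace}/{name}"

# A plain dict stands in for the distributed ETCD key-value store.
etcd = {}

# "Creating" a pod object stores its spec at its key, the way the
# API server persists objects on your behalf.
etcd[registry_key("pods", "default", "nginx")] = {"image": "nginx:1.25"}

print(etcd["/registry/pods/default/nginx"]["image"])  # nginx:1.25
```

The takeaway: everything you create through the API ends up as a keyed record in ETCD, and only the API server reads or writes those records.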
The scheduler is just what it sounds like: it's responsible for scheduling pods on the worker nodes. It watches for new pods in the pending state and finds a worker node with enough resources to handle the pod's specs. There are several considerations the scheduler takes into account when deciding which worker node to schedule the pod onto. Note that the scheduler only assigns the pod to a node; it does not actually run the pod. Wait, all this talk about pods: what are pods? I thought k8s was a container platform? For now, just consider a pod the same thing as a container. A deeper dive into pods will come in the next blog.
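The filter-then-pick logic can be sketched in a few lines of Python. This is a toy illustration, not the real scheduler, which weighs many more factors (affinity rules, taints, and so on):

```python
def schedule(pod_cpu, nodes):
    """Return the name of a node with enough free CPU, or None if none fits."""
    # Filtering step: keep only nodes that can fit the pod's request.
    feasible = [name for name, free_cpu in nodes.items() if free_cpu >= pod_cpu]
    # Scoring step (simplified): prefer the node with the most free CPU.
    return max(feasible, key=lambda name: nodes[name], default=None)

nodes = {"worker-1": 2, "worker-2": 6, "worker-3": 4}  # free CPU cores
print(schedule(3, nodes))  # worker-2
```

Remember, the result is only an assignment; actually starting the pod is the kubelet's job, as we'll see below.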
The controller manager is a daemon process that manages controllers. The important thing here is understanding a controller's role within the k8s ecosystem. Think of controllers as being responsible for managing a finite group of resources. For example, a ReplicationController is responsible for making sure the correct number of pods is running within the worker nodes. Its sole job is to know the current and desired state of any given pod and the number of pod instances that should be running across the worker nodes. Controllers follow the same design principle as microservices: each serves a single function and performs that single function well. This also makes controllers extensible, and you can create your own custom controllers when the need arises. The default controllers are replication, node, endpoints, and service account & token. The controllers act as the brains of the operation, and more details on these controllers are to come in a future post.
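The heart of any controller is a reconcile loop: compare desired state to current state and act on the difference. A toy Python sketch (not real controller code) of a ReplicationController's logic:

```python
def reconcile(desired, current):
    """Return the actions needed to move the current pods to the desired count."""
    diff = desired - len(current)
    if diff > 0:
        # Too few pods running: create the missing ones.
        return ["create pod"] * diff
    if diff < 0:
        # Too many pods running: delete the extras.
        return ["delete pod"] * (-diff)
    return []  # already at the desired state; nothing to do

print(reconcile(3, ["pod-a"]))            # ['create pod', 'create pod']
print(reconcile(1, ["pod-a", "pod-b"]))   # ['delete pod']
```

Every default controller follows this same pattern against its own resource type, which is what makes the model so easy to extend with custom controllers.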
Now that we understand how the control plane works, let's take a look at the worker nodes. We have fewer components on this end, but this is where the real work happens: your code and business logic! The kubelet is the downstream process of the scheduler and the ReplicationController. Once the scheduler has assigned a pod to a node, the kubelet is responsible for running the pod in the container runtime according to its specs. If the pod were to crash for whatever reason, the kubelet would get the pod running again. The kubelet also manages other interactions with the pod, such as retrieving logs, status, and metrics, and shelling into the pod.
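The kubelet's "keep it running" behavior boils down to a sync loop over the pods assigned to its node. A toy Python sketch (not real kubelet code), where runtime_start stands in for a call to the container runtime:

```python
def sync_pods(assigned, running, runtime_start):
    """(Re)start any assigned pod the container runtime reports as not running."""
    for pod in assigned:
        if pod not in running:
            runtime_start(pod)  # ask the container runtime to start the pod

started = []
# "web" is already running; "db" has crashed, so it gets started again.
sync_pods(["web", "db"], {"web"}, started.append)
print(started)  # ['db']
```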
The kube-proxy component is responsible for managing networking rules on the underlying host operating system for service-to-service communication. This is usually done via iptables or similar Linux-native routing mechanisms. One of the overlay networks within k8s is the service network. This is the network that allows your pods to communicate with another grouping of pods. While I'll be providing a deeper dive into services and networking in an upcoming post, the thing to note here is that the service network itself is an abstraction, and its addresses aren't really routable. This is why kube-proxy writes routing rules that rewrite the destination IP to one of the pod IPs within that particular service.
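The destination-rewrite idea can be sketched in Python. This is a toy illustration, not real iptables rules: the service IP and pod IPs below are made up, and real kube-proxy in iptables mode picks a backend at random rather than in strict round-robin order:

```python
import itertools

# Traffic sent to a virtual service IP gets rewritten to one of that
# service's backing pod IPs (addresses are made up for illustration).
service_endpoints = {
    "10.96.0.10": itertools.cycle(["172.17.0.4", "172.17.0.7"]),
}

def rewrite_destination(dst_ip):
    """Rewrite a service IP to a backing pod IP; pass other IPs through."""
    backends = service_endpoints.get(dst_ip)
    return next(backends) if backends else dst_ip

print(rewrite_destination("10.96.0.10"))  # 172.17.0.4
print(rewrite_destination("10.96.0.10"))  # 172.17.0.7
```

The key point survives the simplification: the service IP is never the final destination; it's a stable virtual address that kube-proxy translates into a real pod IP.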
Lastly, there is the container runtime component, which is probably already familiar to many of you. Two popular container runtimes are Docker and containerd. A container runtime simply provides an environment that understands the makeup of your container image and runs it as a container.
Stay tuned for future posts where I will dive into the resources that are deployed into the k8s environment. I will cover authorization, pods, services, default controllers and more!
Read the next blog in this series here.
You can refer to the official Kubernetes components documentation here: https://kubernetes.io/docs/concepts/overview/components/