
Senior Cloud Systems Developer Lead, Craig Bowers, continues his Kubernetes 101 blog series with a post focused on core resources and default controllers.
In the second installment of the Kubernetes 101 series, I’m going to take you on a deeper dive into the k8s resources that live within the environment. This builds off the first blog, so if you haven’t already read it, you can catch it here.
While there are many resource types that can be deployed into a k8s cluster, today I’ll be focusing on a few of the core types: namespaces, RBAC, ConfigMaps, service accounts, pods, services, and default controllers.
Namespaces
Namespaces are a fundamental concept in k8s and one of the first things created in a new cluster. A namespace provides a way of grouping resources, and can also be looked at as a barrier between them. Think about it like this: say I have an application called ‘foo’. I create a namespace called foo in the k8s cluster, and when I deploy my application I specify that it goes into the foo namespace. If another application called ‘bar’ is deployed in the same cluster, I could try to look at the resources in the bar namespace, but I would receive an error because I only have access to the foo namespace. If you do not specify a namespace, a namespace called default is used. Namespaces set up the foundation for authorization within a cluster, as most roles and permissions are bound to namespaces.
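To make this concrete, the foo namespace from the example above can be created either imperatively with a one-liner, or declaratively from a manifest:

kubectl create namespace foo

apiVersion: v1
kind: Namespace
metadata:
  name: foo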
Role Based Access Control
Role Based Access Control, or RBAC, is how k8s (version 1.8+) enforces authorization. These are rules, implemented as a role, that tell k8s who can do what within a cluster. Just like IAM on any cloud platform, you should follow the least-privilege model when defining roles. Roles come in two flavors: cluster-wide (ClusterRole) and namespaced (Role). Cluster-wide roles are good for setting global policies; you can then bind the role to specific namespaces and service accounts. Namespaced roles exist only in their namespace and cannot be used outside of it. The policy within the role is exactly the same across the two; it’s just a matter of how widely or narrowly they are scoped. In the policy, you specify the resource and what actions can be taken on it. A resource is any k8s resource, like the pods and namespaces we have already talked about. The actions include create, list, watch, delete, and so on.
Creating a role doesn’t actually do anything until it’s bound to an entity like a user or service account. A user in k8s is an identity that is external to the cluster, perhaps an IAM account from your cloud service provider, or even an Active Directory account if you’re hosting the cluster on-prem. Service accounts are internal to the cluster and give an identity to processes running in a pod. Binding the user or service account to a role is how you control authorization to k8s resources.
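The binding example below references a service account named craig in the foo namespace. For completeness, a minimal manifest to create that service account would look like this:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: craig
  namespace: foo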
I create a cluster role called “foo-role”. This role can get, list, watch, create, and delete pods.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: foo-role
rules:
- apiGroups: ['*']
  resources: ['pods']
  verbs: ['get','list','watch','create','delete']
Now I will bind this role to my service account in the foo namespace.
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: foo-role-binding
  namespace: foo
roleRef:
  kind: ClusterRole
  name: foo-role
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: craig
  namespace: foo
I can verify my access using the ‘kubectl auth can-i’ command.
kubectl auth can-i get pods --namespace foo --as=system:serviceaccount:foo:craig
yes
kubectl auth can-i create pods --namespace foo --as=system:serviceaccount:foo:craig
yes
Let me try to create a service account and a role resource in the foo namespace.
kubectl auth can-i create serviceaccount --namespace foo --as=system:serviceaccount:foo:craig
no
kubectl auth can-i create role --namespace foo --as=system:serviceaccount:foo:craig
no
As I would expect, I do not have authorization on any resources other than pods.
The final test is to see if I can get or create pods in the bar namespace.
kubectl auth can-i get pods --namespace bar --as=system:serviceaccount:foo:craig
no
kubectl auth can-i create pods --namespace bar --as=system:serviceaccount:foo:craig
no
Perfect! Based on the access the k8s admin has granted me, I can only take actions on pods in the foo namespace.
ConfigMaps
ConfigMaps are a means to extract configurable data from your application container and bind to it at runtime. Like most resources, the ConfigMap must be deployed into the same namespace as the pod. A common practice is to define your ConfigMap as a key/value store; within the pod spec you can then reference the data by the ConfigMap name and the key name within it. You can even deploy an existing configuration file as a ConfigMap, where the filename becomes the key name and the file’s contents become the key’s value. You can then mount this file within the pod and access it like you normally would on the file system directly.
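As a quick sketch of the key/value style, a single key can also be pulled into a container as an environment variable. The DATABASE_ENDPOINT variable name and flat database key below are illustrative, assuming a ConfigMap that stores plain key/value pairs rather than whole files:

env:
- name: DATABASE_ENDPOINT      # hypothetical variable name inside the container
  valueFrom:
    configMapKeyRef:
      name: foo-key            # the ConfigMap to read from
      key: database            # assumes a simple 'database' key exists in that ConfigMap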
An example of application data in a ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: foo-key
  namespace: foo
data:
  foo.properties: |
    env=dev
    database=dev.database.endpoint:3306
  general.properties: |
    key1=value1
    key2=value2
I will then mount the ConfigMap inside the pod. I specify the volumes parameter that references the ConfigMap. Then, within the container spec, I specify a volumeMounts entry that maps to the volume and provide the path to mount it within the container.
spec:
  serviceAccountName: craig
  volumes:
  - name: application-config
    configMap:
      name: foo-key
  containers:
  - name: busybox-container
    image: gcr.io/google_containers/busybox
    command:
    - "sleep"
    - "3600"
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: application-config
      mountPath: /var/secrets/foo
After the container is running, you can see the ConfigMap is represented as files within the container.
/ # ls -l /var/secrets/foo/
lrwxrwxrwx    1 root     root            21 May 21 17:41 foo.properties -> ..data/foo.properties
lrwxrwxrwx    1 root     root            25 May 21 17:41 general.properties -> ..data/general.properties
/ # cat /var/secrets/foo/foo.properties
env=dev
database=dev.database.endpoint:3306
If you have sensitive data you can store it in a Secret, which is another resource inside k8s, and it can be mounted the same way as a ConfigMap. Secrets are only base64 encoded, so they are not entirely secure. When creating a secret from a file, only the filename is exposed in the secret. You can get to the encoded data inside the file if you have RBAC permissions to do so, but a simple get or describe of the secret is not enough.
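As a quick sketch, creating a secret from a file works much like a ConfigMap; the credentials.properties filename here is hypothetical:

kubectl create secret generic foo-secret --from-file=./credentials.properties --namespace foo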
Back to Pods
Switching gears back to pods: I said in the first post to think of pods as containers. Pods are actually a grouping of containers, so you can have multiple containers running per pod. However, it is best practice to deploy a single container per pod. One example of running multiple containers in a pod would be logging, where you deploy a second container, referred to as a sidecar, that contains a logging agent. The pod network is another network within the k8s ecosystem. Each pod gets assigned a single IP from the pod network range, and all containers within a pod share that same IP. Multiple containers in a single pod can communicate with each other via localhost and a port. Pods contain many metadata attributes that help identify the pod. One in particular is labels. Labels are important because they put an identity on the pod and are used by other resources to control which pods they interact with. As mentioned in my first post, there are several overlay networks, the service network being one; you can refer to that post to better understand the difference between the two.
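To tie labels and sidecars together, here is a minimal sketch of a two-container pod. The app: foo label is illustrative, and a real logging sidecar would run a logging agent image rather than busybox:

apiVersion: v1
kind: Pod
metadata:
  name: foo-pod
  namespace: foo
  labels:
    app: foo                     # identity other resources use to select this pod
spec:
  containers:
  - name: foo-app                # main application container
    image: gcr.io/google_containers/busybox
    command: ["sleep", "3600"]
  - name: log-agent              # sidecar container, shares the pod's IP
    image: gcr.io/google_containers/busybox
    command: ["sleep", "3600"]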
Services
I’ve talked about the service network a couple of times now, but you still may be wondering: what exactly is a service? You can think of the service resource as a way to load balance across pods. When you have multiple pods running across multiple nodes, how does the networking know which pod to route to? This is what the service does. When you create a service for a set of pods in a particular namespace, it is able to find all the pod IPs that match the criteria, typically labels. The service itself is assigned an IP from the service network range. Another pod can call the service name and be routed to one of the pods behind the service. There are three types of services you can deploy: ClusterIP, NodePort, and LoadBalancer. With ClusterIP, the service is assigned an IP that is only addressable within the cluster, a good option for backend services that need to communicate with each other. With NodePort, the service is assigned a random port from within a specified range on the worker node itself. The service can be accessed via the node’s IP and whatever port has been assigned, which allows a connection from outside the cluster to be established. The port is reserved on all worker nodes, so you can use any worker node’s IP to make the connection. The LoadBalancer type creates a load balancer on the underlying cloud platform you’re running on, and is only supported on cloud platforms.
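Here is a minimal sketch of a ClusterIP service that would select the pods labeled app: foo from the pod example above; the port numbers are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: foo-service
  namespace: foo
spec:
  type: ClusterIP
  selector:
    app: foo           # route to pods carrying this label
  ports:
  - port: 80           # port the service exposes
    targetPort: 8080   # port the container listens on (illustrative)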
Controllers
Wrapping up this post with a deeper dive into the default controllers on the master nodes. As a quick recap, controllers perform specific functions within the k8s environment. The four default controllers that come out of the box are the endpoints, node, replication, and service account (SA) & token controllers. The endpoints controller joins services with pods. I just talked about the creation of services, and now we can see some of the magic that happens under the hood: when a service is created, the endpoints controller creates an endpoint object for the service, containing a list of all pod IPs that are in the service. The service constantly checks the endpoint object, and if an IP changes it is updated so traffic is always routed to a healthy pod. The node controller is responsible for launching new nodes and checking the health status of all nodes. K8s itself cannot really launch new nodes; for this to work it requires some autoscaling configuration on your cloud provider’s platform. The replication controller is responsible for maintaining the correct number of pods across the cluster. The SA & token controller ensures a default service account and API token are generated for each namespace. When you create a service account, this controller will also generate an API token for that account. The API token is the means for the account to authenticate with the API server, but more on that in a later post. There are many other types of controllers that you will become familiar with as you continue working with Kubernetes.
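You can inspect the endpoint object the endpoints controller creates for a service; for the hypothetical foo-service sketched above, the output would look something like this (the pod IPs are illustrative):

kubectl get endpoints foo-service --namespace foo
NAME          ENDPOINTS                         AGE
foo-service   10.244.1.5:8080,10.244.2.7:8080   1m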
Stay tuned for future posts where I will talk about how you, as an end user, can interact with the cluster to create and modify all of these resources I have been going in depth on, as well as additional resources around deployments of pods.
References:
https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
https://kubernetes.io/docs/reference/access-authn-authz/rbac/
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/
https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
https://kubernetes.io/docs/concepts/workloads/pods/
https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/
https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/