Senior Cloud Systems Developer Lead Craig Bowers adds another entry to his Kubernetes 101 blog series. This post provides an overview of how to interact with the cluster in different ways to deploy pods.

In the third installment of the k8s 101 series, I'm going to give an overview of how to interact with the cluster in different ways to deploy pods. This builds off the second blog, so if you haven't already read it, you can read it here.

I've talked about all the components that make up a Kubernetes environment, from the control plane, to the worker nodes, to the resources that are deployed to make a k8s cluster functional. You may be wondering to yourself, how do we actually deploy these resources and manage their lifecycle? Let me introduce the kubectl utility, pronounced kube-CTL or kube-cuttle. Kubectl is a command-line interface for running commands against your cluster. You have to be authenticated and have the proper local configs for the utility to work properly, but I'll touch on that later in the series.
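
Before we get into manifests, here are a few everyday kubectl commands to give you a feel for the utility (just a sketch; the exact output depends on your cluster):

# Show the kubectl client version and the cluster's API server version
kubectl version

# List the nodes that make up the cluster
kubectl get nodes

# List pods in the current namespace
kubectl get pods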

Managing Resources

Typically, for most resources, you generate a YAML file that describes the resource you are creating, commonly referred to as a manifest. Deploying your k8s resources via YAML allows you to version control your cluster resources, as well as redeploy an exact replica of the environment when disaster strikes. In the manifest, the kind field tells k8s what type, or kind, of resource we are creating or manipulating. This is where we would specify Namespace, Pod, ServiceAccount, Role, RoleBinding, etc.
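
As a quick example, the Pod manifest below targets a namespace named foo, so that namespace needs to exist first. A minimal manifest to create it (the file name namespace.yaml is just my choice for this walkthrough) would look like this:

apiVersion: v1
kind: Namespace
metadata:
  name: foo

kubectl create -f namespace.yaml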

Here’s what foo.yaml would look like to create an Nginx pod:

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: nginx
  name: nginx-pod-foo
  namespace: foo
spec:
  containers:
  - image: nginx
    name: nginx-container-foo

Then I run the command:
kubectl create -f foo.yaml

This sends a POST request to the API server, which validates the YAML syntax and then tries to create or update what you've requested. A response is sent back indicating whether the resource was successfully created.
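
If you're curious what that API traffic looks like, kubectl's verbosity flag will log the HTTP requests it makes to the API server (the exact log lines vary by kubectl version):

kubectl create -f foo.yaml -v=6

You could also use kubectl apply -f foo.yaml, which creates the resource if it doesn't exist and updates it if it does.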

To validate, I run this command:

kubectl get pods --namespace foo
Output:
NAME            READY   STATUS    RESTARTS   AGE
nginx-pod-foo   1/1     Running   0          24s

If you look closely at these two commands, you'll see we ran a create and a get action against the Pod resource. If you recall from my previous post, I said RBAC policies specify actions (create, get, etc.) on resources (Pods, ServiceAccounts, Roles, etc.). In order for me to perform these two commands, I would need proper authorization to do so.
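
A handy way to check that authorization up front is kubectl's built-in access review (the namespace and verbs here mirror the example above):

# Can my current identity create pods in the foo namespace?
kubectl auth can-i create pods --namespace foo

# Can it get (read) pods there?
kubectl auth can-i get pods --namespace foo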

In foo.yaml, under the metadata section, I specified the name of my Pod and the namespace in which to create it. Notice how the 'kubectl get' command has the '--namespace' flag. If I hadn't included this flag, the API would have only returned details from the default namespace and would have shown no pods, since none have been deployed to that namespace.
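
A few variations of the get command make the namespace behavior easy to see (-n is shorthand for --namespace):

# Looks only in the default namespace, so our pod won't show up
kubectl get pods

# Same as --namespace foo
kubectl get pods -n foo

# List pods across every namespace
kubectl get pods --all-namespaces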


Deployments

Great! Now I have a pod deployed, but a single pod isn't highly available and is a single point of failure. If you wanted to deploy multiple pods of the same container image, you would have to define multiple pod resources in your YAML file, each with a unique name. To support an HA model we should use the Deployment resource, which is a higher level of abstraction than a Pod and considered best practice. You can think of it as a wrapper around the Pod resource, and one key differentiator is the replicas parameter: a numeric value that tells k8s how many instances of the pod to deploy.

I have changed foo.yaml to look like this:


apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx-pod-foo
  namespace: foo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      serviceAccountName: foo-user
      containers:
      - image: nginx
        name: nginx-container-foo


Notice the serviceAccountName parameter as well. I'm telling the deployment to launch 3 instances of my Pod and to run them as the foo-user service account. Kubernetes continuously watches these pods, and if one were to crash, it would launch a new instance to meet the desired state of 3 instances.
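
A rough way to watch that self-healing in action looks like this (the pod name in the delete command is a placeholder; Deployments generate pod names with random suffixes):

kubectl create -f foo.yaml
kubectl get deployments --namespace foo
kubectl get pods --namespace foo

# Delete one pod and watch the Deployment spin up a replacement to get back to 3
kubectl delete pod <generated-pod-name> --namespace foo
kubectl get pods --namespace foo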


DaemonSets

In some circumstances you may need to run a particular pod on each worker node, but how can we accomplish this when the Scheduler is in control of placing pods on the worker nodes? Kubernetes has a resource called DaemonSet that will deploy a copy of the pod to each worker node. Some use cases for this are cluster storage daemons, logging daemons, and monitoring daemons. A DaemonSet is very similar to the Deployment resource in that it's a higher-level wrapper around a pod. Each has its own controller that manages the subtle difference between the two, i.e. deploying the pod to every worker node versus deploying a specific number of pod replicas.


I update foo.yaml to look like this:


apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: nginx
  name: nginx-pod-foo
  namespace: foo
spec:
  selector:
    matchLabels:
      app: nginx
  template:
;">    metadata:
">      labels:
        app: nginx
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: nginx-container-foo
        image: nginx


You can see these are almost identical. With DaemonSets you don't specify the replicas parameter, because the pod is deployed to every node. Notice the tolerations parameter. Master nodes carry a NoSchedule taint that normally keeps workloads off of them; by tolerating that taint, this DaemonSet's pods are allowed to run on the master node(s) as well as the worker nodes. Leave the toleration out and the pods will only land on the workers. Taints and tolerations are an advanced topic and I encourage you to look into them!
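
To confirm where the DaemonSet's pods actually landed, you can list them with the node column included (again, just a sketch):

kubectl get daemonsets --namespace foo

# The -o wide output adds a NODE column, one pod per node
kubectl get pods --namespace foo -o wide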

References:

https://kubernetes.io/docs/reference/kubectl/

https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/

https://kubernetes.io/docs/concepts/workloads/controllers/deployment/

https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/