Cloudreach Blog


Bridging Multi Cloud Deployments with Kubernetes

Jeremy Bartosiewicz 27th September 2017

A Global Cloud Professional Services Company working with another Global Professional Services Company on bridging multi-cloud deployments with Kubernetes… What could possibly go wrong? Might we all get stuck aimlessly in workshops, evaluating the hundreds of tools available on the market, only to reach a stalemate? Might we spend so much time evaluating tools that we never actually get closer to our goal?

Almost, but surprisingly, no. At Cloudreach we approach projects differently; we are, after all, a cloud-native company. The global Kubernetes community is growing like wildfire, as is the toolset supporting the platform. As such, there are a lot of ideas that require proofs of concept and evaluation. It is a privilege to work with so many capable minds across the globe and to feed back into what feels like a new page in the book of delivering applications to production.

Making the right selection of tooling, with the goal of “bridging clouds” to achieve a provider-agnostic application deployment using Kubernetes, is no mean feat. Making this new Kubernetes-based platform enterprise-ready was just the icing on the cake. Azure, GCP, AWS? With Kubernetes, our applications can love all of you without actually caring about who you are!

What is Kubernetes?

“Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. Designed on the same principles that allows Google to run billions of containers a week…”

Yes, you did read that right: a Google-designed open-source platform, with the goal of using it in Azure, AWS and GCP so that developers can deploy apps to any cloud platform without having to modify the application significantly. The developer simply needs to commit to a life of Docker and death by YAML.
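To make that concrete, here is a minimal sketch of the kind of YAML a developer commits to — a hypothetical Deployment manifest for an nginx web tier (the names, image and replica count are illustrative, not from the project described here):

```yaml
# Illustrative Deployment: run three replicas of an nginx container.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
```

The same manifest applies unchanged to a cluster running in AWS, Azure or GCP, which is precisely the provider-agnostic property the post is after.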

What does enterprise-ready mean for our new platform build?

An enterprise-ready cluster, in the context of our customer, should take into account the following requirements:

  • Stable clusters with thorough functional and regression testing. There is a significant release of Kubernetes every 3–6 months.
  • Scalable clusters. However, depending on the networking component and cloud provider used for the cluster architecture, there may be limitations on how scalable the cluster is.
  • Network Isolation. By default, applications deployed within the Kubernetes cluster are able to communicate with each other freely. In a multi-tenant environment, we need to ensure strict tenant isolation is enforced with adequate network policies in place.
  • Data Isolation. In a multi-tenant environment, we also need to ensure that each tenant’s data is well isolated, and encrypted at rest as well as in transit.
  • Secure build. Our deployed applications need to have minimal known vulnerabilities. Vulnerability scanning of containers is an ongoing debate. We also need a secret store which can be used by any stack build / deployment.
  • Automation. Ideally our cluster is built in each cloud platform in a fully automated and repeatable way, as are the supporting tools / components.
  • Business Continuity. This is achieved mainly via internalised dependencies and an automated cluster build. Public Docker containers can disappear at any point in time, and if the cluster fails we need to be able to rebuild from mirrored registries. Backup and DR strategy play a key role.
  • Multi region deployment. Our clusters need to be able to communicate securely on a secure private backbone network between cloud providers and regions of the relevant provider.
  • Monitoring. We need visibility into both our clusters and deployed applications.
  • Future-proof. We need to be able to follow the regular releases with some velocity as the space is continuously evolving.
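The network-isolation requirement above maps directly onto a Kubernetes primitive. As a hedged sketch (namespace and policy names are hypothetical, not taken from the customer environment), a NetworkPolicy that locks a tenant's namespace down to same-namespace traffic might look like:

```yaml
# Hypothetical tenant isolation: pods in the "tenant-a" namespace
# only accept ingress traffic from pods in that same namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: tenant-a-isolation
  namespace: tenant-a
spec:
  podSelector: {}          # applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}  # only pods from the same namespace
```

Note that a NetworkPolicy only has effect when a network plugin that enforces policy (such as Calico, discussed below) is installed in the cluster.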

Which platform components and tools did we settle on?

  • Automation – kops and Terraform were used for cluster build automation, with tweaks to kops of course. This helped us meet some of our enterprise-ready criteria.
  • Automation – Jenkins can be used to orchestrate our cluster build methodology, wrapping around kops, plus day-to-day operational jobs.
  • Application / Kubernetes package management – Helm is the package manager for Kubernetes, using templated YAML to represent templated applications.
  • Secure Build and Business Continuity – Quay and Clair provide a highly available private Docker registry with integrated vulnerability analysis.
  • Secure Build and Secret Store – HashiCorp Vault + Consul were also used as a highly available and secure secret store which can be built in any of our providers.
  • Network Isolation – Calico for Kubernetes is a network plugin which provides a network policy controller to help enforce tenant isolation using policies and Kubernetes namespaces.
  • Cluster Scalability – Calico for Kubernetes also handles intra-cluster networking itself; with other plugins, the cluster may be limited in how far it can scale. With route reflectors implemented alongside our cluster build, we are able to produce a stable cluster in excess of 1,000 nodes.
  • Data Isolation – cloud-specific storage services can be used as backing storage, supporting both encryption of data at rest and in transit (e.g. EBS encryption with KMS, or Azure Key Vault).
  • Monitoring – New Relic infrastructure monitoring, Splunk and the Kubernetes dashboard can be used.
  • Multi region deployment – we rely on our automation and an enterprise Transit VPC implementation in AWS. This is a topic in its own right.
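To illustrate the Helm point above, here is a hedged sketch of a chart template — a Deployment fragment where the release name, replica count and image are injected at install time (the value keys and structure are illustrative, not the actual charts used on the project):

```yaml
# templates/deployment.yaml in a hypothetical Helm chart: replica
# count and image come from values.yaml (or --set) at install time.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}-web
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-web
    spec:
      containers:
        - name: web
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

This is what "templated YAML which represents templated applications" means in practice: one chart, many environments, with per-environment values supplied at deploy time.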

How does Kubernetes help?

Kubernetes is a clustered Docker orchestration platform: it manages fleets of containers and abstracts cloud/provider-native components (be they load balancers, instances, storage volumes, etc.) as components within the platform. The platform gives you new primitives: deployments (collections of pods and containers), services (VIPs and ELBs), persistent volumes, ingress controllers, daemon sets, and more. A developer is therefore able to compose a system from these primitives, defining the target infrastructure as YAML, which Kubernetes translates into cloud-native services depending on where it is running. As a result, you can successfully bridge your application deployments across multiple cloud providers using Kubernetes.
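The service primitive is the clearest example of this abstraction. A minimal sketch (names are illustrative): the single manifest below provisions an AWS ELB, an Azure Load Balancer or a GCP forwarding rule, depending on where the cluster runs, without the developer ever touching a provider API.

```yaml
# Illustrative Service: "type: LoadBalancer" is translated by
# Kubernetes into whichever load balancer the underlying cloud
# provider offers (ELB on AWS, Azure LB, GCP forwarding rule).
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web        # routes to pods carrying this label
  ports:
    - port: 80
      targetPort: 80
```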

Can I have more details? Sure, but not in this blog post: each point could be a lengthy topic in its own right. Comment on this post if you would like to explore any of them further.

How far did we get our platform?

Within 6 months we achieved all of our goals for one cloud provider and helped upskill and build both a cluster operations team and a platforms team, all while using cutting-edge technologies and products. We are now able to provide a quick deployment path to production for developers using the platform, with a well-established promotions model from dev to stage to prod.

Developers are happier than before, building containers and YAML manifests to deploy applications with increased speed and agility, knowing that what they build is what ends up in production. A full site deployment to an environment can be achieved within 2–3 minutes. (Great demo shop here -> https://github.com/microservices-demo/microservices-demo).

Next stop? Cloud providers number 2 and 3. Will it work? We think so, because everything we have implemented has been built with multiple cloud providers in mind from the very beginning of the project. As time progresses, so do Kubernetes cluster federation features…