Many early adopters think DevOps is a set of software development practices and organizational processes that are better suited to modern cloud architectures than legacy ways of doing things. This is true, but DevOps can also be much more.
DevOps is a revolutionary approach that asks developers, IT teams and other stakeholders to accept many kinds of change. They’ll adopt more efficient practices, shift their mindsets to become more open to innovation and work to break down silos within the enterprise. They’ll also implement new tools and technologies.
Service meshes are among these. They’re quickly becoming an essential part of today’s cloud native microservices-based architectures. This is partly because they’re a natural fit for the way DevOps pipelines build and release code, and partly because they simplify communication within applications and enhance overall security and reliability.
What is a service mesh?
A service mesh is a software-defined infrastructure layer that enables DevOps practices to control how sets of microservices share data and communicate with one another.
In microservices architectures, an application — which would have been designed as a single monolithic unit in legacy computing environments — is instead built out of a number of small independent modules. Each of these modules, which together are called “services,” is written independently.
Because of their small size and reduced complexity, individual services can be built, tested and released more quickly than an entire legacy application, which supports the core DevOps principle of rapid delivery. And because they’re independent, they can fail individually without breaking the application as a whole. They can also be tested, changed and redeployed while the application continues to run, making the application much easier to maintain. This enhances the flexibility, scalability and reliability of the overall architecture.
A development organization that’s just beginning its cloud journey might have only a few microservices at first. Yet, as a cloud native application’s footprint grows, it will quickly become larger and more complex, incorporating dozens if not hundreds of individual services. In order to provide the functionality a business needs, these services must communicate — requesting data from one another or telling each other when to execute procedures.
Instead of writing individual rules to determine each service’s communication patterns, DevOps teams using a service mesh can automatically configure how requests are routed between all microservices so that they travel efficiently. The service mesh can also collect performance metrics on service-to-service communication and can balance traffic loads to keep individual services from being overwhelmed by too many simultaneous requests.
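The routing, metrics and load-balancing behavior described above can be sketched in a few lines of Python. This is a toy illustration only, not any real mesh’s API; the `MeshRouter` class, service names and addresses are all invented for the example. It round-robins requests across a service’s replicas so no single replica is overwhelmed, while counting requests as a basic performance metric:

```python
from collections import defaultdict
from itertools import cycle

class MeshRouter:
    """Toy router: round-robins requests across a service's replicas
    and records per-service request counts as a basic metric."""

    def __init__(self, registry):
        # registry maps a service name to the addresses of its replicas
        self._replicas = {name: cycle(addrs) for name, addrs in registry.items()}
        self.request_counts = defaultdict(int)

    def route(self, service):
        """Return the next replica address for `service` (round-robin)."""
        self.request_counts[service] += 1
        return next(self._replicas[service])

# Two replicas of a hypothetical "orders" service share the load evenly
router = MeshRouter({"orders": ["10.0.0.1:8080", "10.0.0.2:8080"]})
first = router.route("orders")   # → "10.0.0.1:8080"
second = router.route("orders")  # → "10.0.0.2:8080"
```

A real mesh makes far richer decisions (latency-aware balancing, retries, circuit breaking), but the key point is the same: the routing logic lives in the mesh, not in each service.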
A service mesh creates an array of network proxies that handle communication among the services. These are secondary, supportive services that run alongside the primary services, working together to form a mesh that passes messages between them. The service mesh can also enforce policies, manage access and authentication and perform service discovery and health checks.
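To make the proxy pattern concrete, here is a minimal Python sketch of a sidecar proxy sitting in front of a service. Everything here is illustrative (the `SidecarProxy` class, the policy model, the service names); it simply shows the idea that all traffic passes through the proxy, which can then enforce an access policy and answer health checks on the service’s behalf:

```python
class Service:
    """A primary service: the business logic the proxy sits in front of."""
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def handle(self, message):
        return f"{self.name} processed {message}"

class SidecarProxy:
    """Toy sidecar: all traffic to the service flows through it,
    so it can enforce access policy and report service health."""
    def __init__(self, service, allowed_callers):
        self.service = service
        self.allowed_callers = set(allowed_callers)

    def health_check(self):
        return self.service.healthy

    def forward(self, caller, message):
        # Enforce a simple access policy before delivering the message
        if caller not in self.allowed_callers:
            raise PermissionError(f"{caller} may not call {self.service.name}")
        return self.service.handle(message)

# Only the "checkout" service is allowed to call "payments"
payments = SidecarProxy(Service("payments"), allowed_callers={"checkout"})
result = payments.forward("checkout", "charge #123")
# payments.forward("reporting", "charge #123") would raise PermissionError
```

Because the service itself contains no access-control or health-check code, those concerns can be changed mesh-wide without touching any service.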
Why DevOps practices rely on service meshes
Service meshes make it easier for DevOps teams to manage cloud native applications in hybrid and multicloud environments. They remove most of the manual work involved in making sure that services can “talk” to one another. They also greatly simplify the process of maintaining that communication infrastructure as the application evolves.
Additionally, deploying a service mesh makes individual microservices much more portable, because the logic governing service-to-service communication is taken out of individual services and abstracted into a universal infrastructure layer. Services can be moved to a different server, a new Kubernetes cluster or an entirely different public cloud platform without requiring a rewrite of the entire application.
Adding a service mesh layer will also improve the security of your microservices-based environment. The more complex a microservices architecture becomes, the more network traffic flows within it, and the larger its potential attack surface grows. Service meshes mitigate this vulnerability by enforcing security policies across the environment and managing service authentication. They can also automatically encrypt all traffic that passes through the mesh by default.
For the enterprise as a whole, employing a service mesh can make it easier to achieve and maintain compliance with regulatory mandates.
Popular open-source service meshes include Istio and Linkerd, both of which can be deployed on Kubernetes. Linkerd can also be deployed into other container schedulers and frameworks, as can Consul, which provides full-featured controls including service discovery, segmentation and configuration functionalities. All are designed with two-plane architectures. In each, a control plane determines how proxies are configured across the service mesh and collects log data, while a data plane consists of the proxies themselves, which are typically deployed alongside each service as “sidecars.”
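The two-plane split can be sketched in a few lines of Python. This is a conceptual illustration, not Istio’s, Linkerd’s or Consul’s actual API; the class names and configuration keys are invented. The control plane’s job is to push one consistent configuration out to every data-plane proxy and gather their telemetry centrally:

```python
class Proxy:
    """Data-plane proxy: holds whatever routing config it was last given."""
    def __init__(self, service):
        self.service = service
        self.config = {}
        self.log = []

    def apply(self, config):
        self.config = dict(config)

class ControlPlane:
    """Toy control plane: distributes one configuration to every
    registered proxy in the mesh and collects their logs centrally."""
    def __init__(self):
        self.proxies = []

    def register(self, proxy):
        self.proxies.append(proxy)

    def push_config(self, config):
        # One policy change here reconfigures every proxy in the mesh
        for proxy in self.proxies:
            proxy.apply(config)

    def collect_logs(self):
        return {p.service: list(p.log) for p in self.proxies}

cp = ControlPlane()
cp.register(Proxy("orders"))
cp.register(Proxy("payments"))
cp.push_config({"retries": 3, "timeout_ms": 500})
```

The operational payoff is that operators interact only with the control plane; the data plane stays uniform without anyone configuring proxies one by one.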
Moving towards greater DevOps maturity
A service mesh can boost the security and resilience of an individual cluster or an entire microservices-based application. DevOps practices that adopt this technology will have an easier time managing networking and connectivity across applications. They will see more reliable service performance and will be able to build and deploy more quickly and with greater confidence.
However, just like the microservices architectures they support, service meshes can introduce additional complexity and new management concerns into development environments. DevOps teams preparing to introduce a service mesh into their architectures must have a strong grasp of the fundamentals of working with containers and container orchestration. They must also be prepared to support an emerging technology that’s evolving rapidly. Because the service mesh landscape is so fluid, in-house teams require adaptability and a willingness to learn if they’re to maximize the value of this communication management layer.
Interested in learning more about how you can enhance the skills and capabilities of your own enterprise’s DevOps team? Discover how our new DevOps-as-a-Service offering will bring you DevOps professionals who care about collaboration and knowledge sharing to serve as embedded resources and immediately accelerate your DevOps maturity.