Service Mesh

Service Mesh Definition

A service mesh provides containerized applications built on a microservices architecture with services such as networking, load balancing, monitoring, and security. The service mesh forms a fabric connecting each container and microservice so they can communicate and interoperate securely.

Diagram: the path from the end user, through the cloud, to the service mesh controller and its container clusters.

FAQs

What Is A Service Mesh?

A service mesh for microservices is a layer of communication infrastructure that efficiently handles service discovery, traffic management, and security (authentication and authorization) for container-based applications. With a service mesh, the underlying connectivity and network services become an integral part of the application and of the interactions between its components.
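
As an illustration of the service discovery piece, here is a minimal, self-contained Python sketch of a registry that maps logical service names to container endpoints and hands callers an endpoint round-robin. The class, method, and endpoint names are illustrative assumptions, not taken from any particular mesh implementation.

```python
# Minimal sketch of the service-discovery role a mesh plays.
# ServiceRegistry, register, resolve, and the endpoints are illustrative only.
from collections import defaultdict


class ServiceRegistry:
    """Tracks which instances (host:port) currently back each logical service."""

    def __init__(self):
        self._instances = defaultdict(list)
        self._cursors = {}

    def register(self, service: str, endpoint: str) -> None:
        # Containers are registered at startup (by themselves or by the platform).
        self._instances[service].append(endpoint)

    def deregister(self, service: str, endpoint: str) -> None:
        # Containers are ephemeral; the registry must forget them quickly.
        self._instances[service].remove(endpoint)

    def resolve(self, service: str) -> str:
        # Callers ask for a logical service name and get back an endpoint,
        # chosen round-robin here.
        endpoints = self._instances[service]
        cursor = self._cursors.get(service, 0)
        self._cursors[service] = (cursor + 1) % len(endpoints)
        return endpoints[cursor % len(endpoints)]


registry = ServiceRegistry()
registry.register("payments", "10.0.0.11:8080")
registry.register("payments", "10.0.0.12:8080")
print(registry.resolve("payments"))  # 10.0.0.11:8080
print(registry.resolve("payments"))  # 10.0.0.12:8080
```

In a real mesh this bookkeeping is automatic: the platform registers and deregisters containers as they come and go, and the proxies consume the registry rather than application code.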

Why Use A Service Mesh?

Service mesh solutions provide:

    • Faster development, testing and deployment of applications.
    • Faster, more efficient application updates, including support for Blue-Green and Canary deployments (see the sketch after this list).
    • More effective management of network services.
    • The ability to create modular container-based apps that can be deployed as part of the CI/CD pipeline.
    • Greater performance and security of microservices applications.
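
To make the canary-deployment point concrete, the following hedged Python sketch shows weighted traffic splitting, the routing rule a mesh applies when a new version should receive only a small share of requests. The version names and the 95/5 split are assumptions chosen for illustration.

```python
# Hedged sketch of weighted (canary) traffic splitting.
# The "reviews-v1"/"reviews-v2" labels and the 95/5 split are illustrative assumptions.
import random

ROUTES = [
    ("reviews-v1", 95),  # stable version keeps most of the traffic
    ("reviews-v2", 5),   # canary version receives a small, controlled share
]


def pick_backend(routes: list[tuple[str, int]]) -> str:
    """Choose a backend version with probability proportional to its weight."""
    versions = [name for name, _ in routes]
    weights = [weight for _, weight in routes]
    return random.choices(versions, weights=weights, k=1)[0]


# Simulate 1,000 requests and count where they land.
counts = {"reviews-v1": 0, "reviews-v2": 0}
for _ in range(1000):
    counts[pick_backend(ROUTES)] += 1
print(counts)  # roughly {'reviews-v1': 950, 'reviews-v2': 50}
```

Because the split lives in the mesh rather than in application code, shifting more traffic to the new version (or rolling it back) is a configuration change, not a redeployment.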

How Does A Service Mesh For Microservices Work?

When monolithic applications are refactored into hundreds or thousands of microservices, containers provide a great computing model to improve the speed of developing, deploying and scaling applications. However, the functional decomposition of monolithic applications into container-based microservices applications introduces challenges of connectivity, traffic management, security, and performance. It is too costly and impractical to deliver services to thousands of containers using discrete appliances for load balancing, performance monitoring, and security.

The solution is a service mesh for microservices — a new way to deliver application networking services that cannot be achieved with traditional appliance-based solutions.

Service mesh is deployed in a variety of ways:

    • Service proxy per node: Every node in the container cluster has its own service proxy. All traffic to and from application instances on that node goes through the local service proxy.
    • Service proxy per application: Every application (microservice) has its own service proxy. All instances of that application access their shared service proxy.
    • Service proxy per application instance: Every application instance (container) has its own “sidecar” proxy.
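
The sidecar model in particular can be illustrated with a small, in-process Python sketch: every request into an application instance passes through that instance's own proxy, which can add telemetry and enforce policy without the application changing. The Sidecar class, the allow-list, and the service names are illustrative assumptions, not any real proxy's API.

```python
# Minimal, in-process sketch of the "sidecar" model.
# Sidecar, the allow-list, and orders_app are illustrative assumptions.
import time


class Sidecar:
    def __init__(self, app_handler, allowed_callers):
        self._handler = app_handler           # the application instance behind the proxy
        self._allowed = set(allowed_callers)  # simple allow-list standing in for mTLS/authz
        self.request_count = 0
        self.total_latency = 0.0

    def handle(self, caller: str, request: dict):
        if caller not in self._allowed:
            return {"status": 403, "body": "caller not permitted"}
        start = time.perf_counter()
        response = self._handler(request)     # forward to the local application
        self.total_latency += time.perf_counter() - start
        self.request_count += 1
        return {"status": 200, "body": response}


def orders_app(request: dict) -> str:
    return f"order {request['id']} accepted"


proxy = Sidecar(orders_app, allowed_callers={"checkout"})
print(proxy.handle("checkout", {"id": 42}))  # allowed, forwarded to the app
print(proxy.handle("unknown", {"id": 43}))   # rejected by the sidecar; app never sees it
print(proxy.request_count, "requests forwarded")
```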

Can A Service Mesh Be Delivered By Appliances?

The short answer is no. A service mesh is a new way to deliver services such as load balancing, traffic management, performance monitoring, and security policies. These services cannot be offered through a discrete appliance because containers are ephemeral and a typical application today can be made up of hundreds, if not thousands, of containers. The application services are integrated from within the compute cluster and delivered through application programming interfaces (APIs). By contrast, configuring a physical hardware load balancer (or its virtual edition built on the same decades-old architecture) for each container or microservice would be unworkable and cost-prohibitive, which is why appliance-based load balancing is simply impractical here.

A service mesh gives organizations a flexible framework for delivering an array of network services. It works alongside containers and manages those services while removing much of the operational complexity associated with modern microservices applications.

A service mesh architecture is made up of proxies that serve as gateways for each interaction between containers. A proxy accepts the connection and spreads the load across the service mesh. The term “mesh” comes from the picture these many proxy-to-proxy connections create: a woven fabric of interconnected microservices.
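
A loose Python sketch of that picture follows: one proxy per service accepts incoming calls and spreads them across that service's instances, and the caller-to-proxy edges that accumulate are the “mesh.” All names and the round-robin policy are hypothetical choices for illustration.

```python
# Hedged sketch of proxies forming a mesh: callers name a destination service,
# and the destination's proxy spreads connections across its instances.
# MeshProxy, the service names, and the instance names are illustrative.
from collections import defaultdict
from itertools import cycle


class MeshProxy:
    """One logical proxy per service, fronting all instances of that service."""

    def __init__(self, service: str, instances: list[str]):
        self.service = service
        self._instances = cycle(instances)    # spread connections round-robin
        self.connections = defaultdict(int)   # caller -> number of connections seen

    def accept(self, caller: str) -> str:
        self.connections[caller] += 1
        return next(self._instances)          # the instance that handles this call


mesh = {
    "cart": MeshProxy("cart", ["cart-0"]),
    "payments": MeshProxy("payments", ["payments-0", "payments-1"]),
}


def call(src: str, dst: str) -> str:
    # The caller only names the destination service; the destination's
    # proxy picks the concrete instance.
    return mesh[dst].accept(src)


print(call("cart", "payments"))            # payments-0
print(call("cart", "payments"))            # payments-1
print(dict(mesh["payments"].connections))  # {'cart': 2} -- one edge of the mesh
```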

How Does Kubernetes Service Mesh Work?

A central controller for the service mesh integrates with Kubernetes, an open source platform for automating the deployment and management of containerized applications. Istio and Envoy are active open source projects that deliver service mesh capabilities for the Kubernetes environment. The central control plane (Istio) orchestrates the connections: it tells the proxies (Envoy instances) deployed alongside the containers how to enforce access control, and it collects performance metrics from them.
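
Conceptually, that division of labor can be sketched as follows: a control-plane object holds the desired policy and pushes it to registered proxies, which enforce it and expose metrics. This hedged Python sketch only mimics the pattern; it does not use Istio's or Envoy's actual APIs, and every class and workload name is an assumption.

```python
# Hedged sketch of the control-plane / data-plane split, not Istio's or Envoy's API.
# ControlPlane, DataPlaneProxy, and the workload names are illustrative assumptions.


class DataPlaneProxy:
    """Stands in for a proxy deployed alongside a workload (e.g. a sidecar)."""

    def __init__(self, workload: str):
        self.workload = workload
        self.allowed_callers: set[str] = set()
        self.requests_served = 0

    def apply_config(self, allowed_callers: set[str]) -> None:
        # Configuration arrives from the control plane, not from the application.
        self.allowed_callers = set(allowed_callers)

    def handle(self, caller: str) -> bool:
        permitted = caller in self.allowed_callers
        if permitted:
            self.requests_served += 1
        return permitted


class ControlPlane:
    """Holds desired state, distributes it, and gathers metrics from the proxies."""

    def __init__(self):
        self.proxies: dict[str, DataPlaneProxy] = {}

    def register(self, proxy: DataPlaneProxy) -> None:
        self.proxies[proxy.workload] = proxy

    def set_policy(self, workload: str, allowed_callers: set[str]) -> None:
        self.proxies[workload].apply_config(allowed_callers)

    def metrics(self) -> dict[str, int]:
        return {name: p.requests_served for name, p in self.proxies.items()}


cp = ControlPlane()
ratings = DataPlaneProxy("ratings")
cp.register(ratings)
cp.set_policy("ratings", {"productpage"})  # only productpage may call ratings

print(ratings.handle("productpage"))  # True  -- allowed by the pushed policy
print(ratings.handle("reviews"))      # False -- denied at the proxy
print(cp.metrics())                   # {'ratings': 1}
```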

Does Avi Offer A Service Mesh?

Yes. The Avi Vantage Platform delivers multi-cloud application services such as load balancing, monitoring and security for containerized applications with microservices architecture through dynamic service discovery, application maps, and micro-segmentation. Avi’s Universal Service Mesh is optimized for North-South (inbound and outbound) and East-West (usually within the datacenter) traffic management, including local and global load balancing. Avi integrates with OpenShift and Kubernetes for container orchestration and security.
