Service Mesh — Part I — Route This, Route That
Service Mesh is the buzzword of 2019, and the ranks of its early adopters are growing fast.
It is important to understand the OSI (Open Systems Interconnection) model, network planes, proxies, and load balancing before you start learning Service Mesh. This Part I explains those fundamentals; I will write two more articles on the same topic to cover the complete subject.
What is a Service Mesh?
- A policy-based, granular software networking tool.
- A developer-driven Software-Defined Networking (SDN) architecture.
- A software abstraction layer over the network.
- It decouples the application code from the network at Layer 5.
- It uses Layer 7 proxying.
- It provides logging, metrics, traffic control, tracing, and encryption for microservices.
Why do we need a Service Mesh?
Prime DevOps tools such as Kubernetes focus on container scheduling, service discovery, scalability, application deployment, and provisioning. These are mostly infrastructure-oriented capabilities, service levels that cover NFRs (Non-Functional Requirements).
There is no comparable tool to manage the service levels of the microservices that run inside the containers.
Service mesh fills that vacuum.
It’s a dedicated software infrastructure layer that makes microservice communication secure and reliable, drives policies, and maintains service levels. Though it relies on an orchestrator to function, it can also work well in non-container environments.
Let’s start reading about the fundamentals of networking:
OSI (Open Systems Interconnection) Model
In the late 1970s, ISO and CCITT came together to develop standards for data-center networking, especially in the telecom sector. Published in 1984 as a base reference model for networking infrastructure, the OSI model has since become the bible of data-center networking.
The electronics in our lives (TVs, cable, smartphones, desktops, laptops, and home systems) are built on the OSI model. You don’t see it at work because the flow is handled behind the scenes by the millions of engineers who build and run your apps, games, and browsers.
The OSI model brings discipline to building data centers, provisioning the network layers, and giving network providers a common understanding for managing their service levels.
The following table summarizes the layers of the OSI model.

| Layer | Name | Examples |
| --- | --- | --- |
| 7 | Application | HTTP, FTP, DNS |
| 6 | Presentation | TLS, data encoding |
| 5 | Session | Session establishment, sockets |
| 4 | Transport | TCP, UDP |
| 3 | Network | IP, routing |
| 2 | Data Link | Ethernet, MAC addresses |
| 1 | Physical | Cables, radio, electrical signals |
These layers take time and effort to understand. For Service Mesh technologies, let’s focus on Layer 5 and Layer 7.
A solid understanding of the physical network planes (Layer 4 and below) is also important for learning Service Mesh, because its concepts are derived from traditional network topologies. Every network engineer at the data center knows these three physical network planes.
Network Planes
A plane is an ‘area of operations’.
- Management Plane
- Control Plane
- Data Plane
The above planes use In-Band and Out-of-Band management to reach the devices at the data center.
In-Band — Uses SSH over the production network to connect to and manage the remote server. Because the medium is the network itself, if the network is down you can’t reach the server to resolve the issue.
Out-of-Band — An alternate path or mechanism to reach the remote server. The administrator uses a dedicated channel, isolated from the regular application/OS administration traffic, to reach the server.
Management Plane
- The connection between a terminal (e.g., a workstation) and the remote server/device.
- The administrator uses it to configure, manage, and monitor the services that run on the remote server.
- Both In-Band and Out-Of-Band are supported.
Control Plane
- It defines the topology of a network (Brain of the router).
- It is responsible for establishing links between routers and for exchanging protocol information.
- It’s a decision-making system to decide the best path to deliver the data (Traffic Control).
Data Plane
- It is part of the network that carries user data.
- It forwards user data based on the forwarding tables computed by the control plane.
- The data plane is also known as the user plane, the forwarding plane or the carrier plane.
The above network planes are physical and Layer 4 focused.
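The division of labor between the two planes can be sketched in a few lines of Python: the control plane’s job is to compute a routing table, and the data plane’s job is only to look destinations up in it. The table entries below are made up for illustration.

```python
import ipaddress

# Hypothetical routing table: in a real router the control plane computes
# these entries (via BGP, OSPF, etc.); the data plane only does lookups.
ROUTES = {
    "10.0.0.0/8": "eth1",
    "10.1.0.0/16": "eth2",
    "0.0.0.0/0": "eth0",   # default route
}

def forward(dst_ip: str) -> str:
    """Data-plane lookup: longest-prefix match picks the most specific route."""
    dst = ipaddress.ip_address(dst_ip)
    best = max(
        (ipaddress.ip_network(p) for p in ROUTES
         if dst in ipaddress.ip_network(p)),
        key=lambda n: n.prefixlen,
    )
    return ROUTES[str(best)]

print(forward("10.1.2.3"))   # eth2: the /16 is more specific than the /8
print(forward("8.8.8.8"))    # eth0: only the default route matches
```

The point of the split is that the lookup is cheap and runs on every packet, while the expensive decision-making that fills `ROUTES` happens separately and less often.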
Service Mesh Planes
Think of applying the same network plane principles to Service Mesh at Layer 7 (Software-Defined Networking).
Every service mesh solution must have two planes: 1. a Service Mesh Data Plane and 2. a Service Mesh Control Plane.
Service Mesh Data Plane (Proxying Layer)
- The data plane is the sidecar proxy (Example: Envoy, NGINX).
- Every application request has to pass through this plane.
- It is responsible for service-to-service communication as well as ingress and egress network traffic.
- It is responsible for performing service discovery, health checking, and routing.
- It does load balancing, authentication and authorization.
- It’s an observability plane that collects performance, scalability, security, availability, and other decision-enabling information.
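As a rough sketch of the per-request job described above (in Python, with made-up service names, endpoints, and a trivial registry standing in for real service discovery):

```python
import random
from collections import defaultdict

# Toy sidecar data plane: every outbound call from the app goes through
# route(), which does service discovery, health checking, load balancing,
# and metrics collection. All names and addresses here are invented.
REGISTRY = {  # service discovery table (normally fed by the control plane)
    "orders": ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"],
}
HEALTHY = {"10.0.0.1:8080", "10.0.0.3:8080"}   # current health-check results
metrics = defaultdict(int)                     # observability: per-endpoint request counts

def route(service: str) -> str:
    """Pick a healthy endpoint for `service` and record the decision."""
    candidates = [e for e in REGISTRY[service] if e in HEALTHY]
    if not candidates:
        raise RuntimeError(f"no healthy endpoints for {service}")
    endpoint = random.choice(candidates)       # simple random load balancing
    metrics[endpoint] += 1
    return endpoint

print(route("orders"))   # one of the two healthy endpoints; 10.0.0.2 is skipped
```

A real sidecar such as Envoy does this at the connection/request level, but the responsibilities are the same ones listed above.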
Service Mesh Control Plane
- The control plane monitors, configures, manages, and maintains all the contributing data planes.
- It provides policies and configuration to all the contributing data planes.
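A minimal sketch of that relationship, with invented policy keys: the control plane owns the configuration and pushes it to every registered data plane, and the proxies only apply what they are given.

```python
# Toy control plane / data plane pair. Policy keys are illustrative only.
class Proxy:                      # stand-in for a sidecar data plane
    def __init__(self):
        self.config = {}
    def apply(self, config):      # invoked only by the control plane
        self.config = dict(config)

class ControlPlane:
    def __init__(self):
        self.proxies = []
        self.policy = {"retries": 2, "timeout_ms": 500}
    def register(self, proxy):
        self.proxies.append(proxy)
        proxy.apply(self.policy)          # new proxies receive the current policy
    def update(self, **changes):
        self.policy.update(changes)
        for p in self.proxies:            # broadcast to all contributing data planes
            p.apply(self.policy)

cp = ControlPlane()
sidecar = Proxy()
cp.register(sidecar)
cp.update(timeout_ms=250)
print(sidecar.config["timeout_ms"])   # 250
```

This is the push model in miniature: operators talk to one control plane, and the control plane fans configuration out to many data planes.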
Proxies
A proxy is a server that mediates clients’ access to files, content, and media in distributed systems. It also acts as a security layer in multi-tier web architectures.
There are several types of proxies. The most notable are: 1. forward proxy, 2. reverse proxy, 3. application proxy.
Forward proxy
A forward proxy is an Internet-facing proxy that allows internal workstations to connect to outside Internet servers. Forward proxies are not used for content delivery; instead, they filter websites, files, and media for business relevance and block harmful sites.
Example: Blocking social media sites at the office network.
Forward proxies are good for:
- Content filtering
- Email security
- Geo restrictions
- Compliance reporting.
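The content-filtering role can be sketched as a simple allow/deny decision. The blocklist below is hypothetical; a real forward proxy would consult a category database and per-user policy.

```python
from urllib.parse import urlparse

# Hypothetical blocklist of domains (e.g., social media at the office network).
BLOCKED_DOMAINS = {"socialsite.example", "games.example"}

def allow(url: str) -> bool:
    """Return True if the forward proxy should let this outbound request through."""
    host = urlparse(url).hostname or ""
    # Block the domain itself and any of its subdomains.
    return not any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

print(allow("https://news.example/article"))       # True: not on the blocklist
print(allow("https://m.socialsite.example/feed"))  # False: subdomain of a blocked site
```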
Reverse proxy
A reverse proxy is an internal proxy server that lets clients reach multiple servers at the backend. It performs TCP multiplexing, funneling many client connections over a small pool of backend connections. A typical TCP mux ratio is 10:1 — ten incoming client connections share one backend connection.
Another notable benefit is persistent connections: the connections between the reverse proxy and the backend servers stay open and can be reused for new client requests, which improves performance.
Reverse proxies are good for:
- Content Redirection
- Load Balancing (TCP Multiplexing)
- SSL Offload/Acceleration (SSL Multiplexing)
- Sticky Sessions
- Caching
- Application Firewall
- Authentication
- Single Sign On
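The load-balancing and sticky-session behaviors from the list above can be sketched as follows. Backend addresses are illustrative; a real reverse proxy such as NGINX or HAProxy does this at the connection level rather than with a Python class.

```python
from itertools import cycle

# Toy reverse proxy: client requests are spread round-robin across a small,
# fixed pool of backends, with optional sticky sessions.
BACKENDS = ["app1:8080", "app2:8080", "app3:8080"]   # invented addresses

class ReverseProxy:
    def __init__(self, backends):
        self._pool = cycle(backends)          # round-robin over the pool
        self._sticky = {}                     # session id -> pinned backend
    def pick(self, session_id=None):
        if session_id is not None:            # sticky session: reuse the pin
            if session_id not in self._sticky:
                self._sticky[session_id] = next(self._pool)
            return self._sticky[session_id]
        return next(self._pool)               # otherwise plain round robin

proxy = ReverseProxy(BACKENDS)
print(proxy.pick())            # app1:8080
print(proxy.pick())            # app2:8080
print(proxy.pick("user-42"))   # pinned once; same answer on every later call
```

Sticky sessions matter when backends keep per-user state in memory; stateless backends can use pure round robin.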
Application Proxy
An application proxy is very similar to a reverse proxy, extended with plug-ins provided by vendors.
For example, a three-tier web architecture with Oracle WebLogic uses one of the leading web servers as a front-end proxy (a reverse proxy to the backend WebLogic servers) with the Oracle proxy plug-in.
The plug-in can load balance, maintain sticky sessions, interpret SSL requests, and block client requests to a failed WebLogic server instance at the backend. It enables web administrators to add multiple applications at the backend (context paths or directories) without involving the network team.
Hardware Load Balancers
Hardware load balancers have existed in the market for more than 22 years. It was in the year 2000 that we implemented an ArrowPoint hardware load balancer at GE, Schenectady.
Hardware load balancers were an advancement in industrial content delivery and website management.
The capabilities of hardware load balancers as follows:
- DNS load balancing
- NAT/PAT
- Act as a reverse proxy for backend server applications
- SSL offload/acceleration
- Sticky Sessions/Persistent Connections
- Priority activation
- Content aware switching
- DDoS attack protection
- Firewall and intrusion detection
- Detect failed servers and stop the user traffic
- Monitoring
Traditionally, hardware load balancers operate at Layer 3 or 4 of the OSI model. They balance on the IP:port combination and are not application aware.
In 2008/2009, the concept of Layer 7 load balancing took center stage due to content delivery requirements. We had several debates in our infrastructure architecture meetings about how to use Layer 7 load balancing for application HA services.
A Layer 7 hardware load balancer is a true reverse proxy server. It is heavily application aware.
Network leaders such as F5 Networks know the growing need for L7-based routing, load balancing, and security. F5 recently brought Aspen Mesh, a startup out of its own incubator, to market to add Service Mesh capabilities to its L7 product solutions.
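The L4-versus-L7 difference can be illustrated with two toy selection functions (pool names and paths are invented): the L4 balancer sees only the connection tuple, while the L7 balancer can switch on the request content.

```python
# Contrast: an L4 balancer sees only IP:port; an L7 balancer sees the request.
POOLS = {  # hypothetical backend pools
    "static": ["cdn1", "cdn2"],
    "api": ["api1", "api2"],
    "web": ["web1"],
}

def l4_pick(src_ip: str, src_port: int, backends):
    """L4: hash the connection tuple across backends; content is invisible."""
    return backends[hash((src_ip, src_port)) % len(backends)]

def l7_pick(path: str) -> str:
    """L7: content-aware switching on the URL path."""
    if path.startswith("/static/"):
        return POOLS["static"][0]
    if path.startswith("/api/"):
        return POOLS["api"][0]
    return POOLS["web"][0]

print(l7_pick("/api/orders"))   # api1: routed by content, not by connection
print(l7_pick("/index.html"))   # web1: the default pool
```

An L4 device cannot express the `l7_pick` rules at all, which is why content-aware routing, and service mesh routing in general, lives at Layer 7.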
Microservices
Microservices — an application architecture that structures an application as a collection of lightweight services with the following characteristics:
- Organized around business capabilities
- Loosely coupled
- Highly maintainable and testable
- Independently deployable
- Generally, Cloud Native
It is the opposite of a heavyweight monolithic application. The microservices architecture enables CI/CD and containers with orchestration capabilities.
Microservices Monitoring
Traditional IT monitoring focuses on IT operations, largely the uptime of servers and services. Microservices monitoring has pushed organizations to use several monitoring tools to manage their SLAs. Monitoring containers, microservice performance, APIs, and security are the key factors.
Service Mesh will play a big role in the coming days in monitoring microservices and reporting their metrics.
This article covers necessary basic concepts to dig through Service Mesh. In the next part, I will explain Service Mesh technologies, tools and its adoption.
Lawrence Manickam is the Founder of Kuberiter Inc, a Seattle-based start-up that provides DevOps services (Jenkins as a Service, Docker as a Service, Kubernetes as a Service, Helm as a Service and Istio as a Service) for multi-cloud.