Today, Service Meshes are becoming an integral part of the cloud-native stack. A large cloud application may be composed of hundreds of microservices and serve millions of users concurrently. A Service Mesh is a low-latency infrastructure layer that handles high volumes of traffic between the components of a cloud application, such as the frontend, backend and database. Most of this communication happens via Application Programming Interfaces (APIs).
Open source Service Meshes such as Linkerd, Istio and Kuma control how the different parts of an application share data with each other. A Service Mesh provides an overarching view of your services and helps with complex activities such as testing, roll-outs, access restrictions and end-to-end authentication.
A Service Mesh helps you push operational concerns into the infrastructure so the application code stays easier to understand, maintain and adapt. Integrating a Service Mesh gives you traffic management, security and policy enforcement across a microservice architecture.
So what does this mean in practice, and what problems does a Service Mesh actually solve? Let’s look at some examples:
Applications are monolithic
Typically, applications start out monolithic, meaning the application is one program, built as one binary and run as one process.
Example of a Monolithic application
As a monolithic application grows, you will face the following problems:
- Scaling and releases. A monolithic application is hard to scale, and autoscaling it is even harder to maintain. With many teams making changes to the same code base at the same time, each release ships its own collection of bugs.
- Reduced technology flexibility. Trying new technologies (e.g., a new programming language) without migrating the entire code base is often problematic.
- Strains on team dynamics. Last but not least, it is harder to draw boundaries of responsibility, assign roles and grow a team when everyone works on one big deliverable.
Many of these issues only become a problem at a certain scale. In most cases a monolithic application is a reasonable choice; it depends on your requirements. But once you have started to scale, the monolith may no longer be an option for you.
A microservices architecture lets developers modify individual application services without redeploying the whole application. Unlike a monolith, individual microservices are built by small teams that choose their own tools and programming languages. In general, microservices are developed independently of each other, communicate with each other over the network, and can fail individually without disrupting the entire application.
Like a monolith, a typical application consists of several logical modules: frontend, backend, database and so on. What makes microservices distinctive is the communication between services. The logic that governs this communication can be coded into each service without a service mesh layer, but as communication becomes more complex, the service mesh becomes more valuable. For cloud applications built on a microservices architecture, a service mesh is a way to combine a large number of individual services into one functional application.
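To make this concrete, here is a minimal sketch (in Python, purely for illustration) of the kind of communication logic that ends up inside every service when there is no mesh layer: retries with exponential backoff. The function name and parameters are hypothetical; the point is that each team must reimplement something like this in its own language and stack.

```python
import time

def call_with_retries(request, retries=3, base_delay=0.5, sleep=time.sleep):
    """Retry a request with exponential backoff.

    `request` is any zero-argument callable that raises on failure.
    Without a service mesh, logic like this is duplicated inside
    every service, in every language the teams use.
    """
    for attempt in range(retries):
        try:
            return request()
        except Exception:
            if attempt == retries - 1:
                raise  # out of attempts, propagate the failure
            sleep(base_delay * 2 ** attempt)  # 0.5s, 1s, 2s, ...
```

A service mesh moves this behavior out of the application and into the infrastructure, so every service gets it consistently and for free.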
Example microservices architecture
Moving to microservices, you will get the following benefits:
- You can develop each service independently of the others.
- You can scale each service independently.
- You are free to write each service in the technology of your choice, as long as the services share the same communication interface.
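The shared communication interface mentioned above is usually something as plain as HTTP with JSON bodies. As a hedged illustration, here is a toy "orders" microservice in Python (the service name and payload are invented for this example); any other service, in any language, can talk to it as long as it speaks the same protocol.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class OrdersHandler(BaseHTTPRequestHandler):
    """A minimal 'orders' microservice speaking plain HTTP/JSON."""
    def do_GET(self):
        body = json.dumps({"service": "orders", "status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging in this demo

def serve(port=0):
    """Start the service on a background thread; port=0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), OrdersHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Because consumers only depend on the HTTP/JSON contract, the implementation behind it can be rewritten in another language without any caller noticing.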
Although the Service Mesh architecture is not limited to microservice-based systems, such systems provide a good example of the service mesh in action.
You could instead push the problems mentioned above into the microservices themselves by giving each of them additional, infrastructure-specific logic. That would mean the following:
- You will be forced to recreate this logic for every technology stack you use.
- The application would grow considerably beyond its business needs. And do developers want to work on the application, or on the infrastructure?
So you need some sort of software stack that modernizes your deployment and allows your services to discover each other, control traffic and policies, and provide observability, ideally without modifying the services themselves. This infrastructure solution is called the Service Mesh.
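One common way a mesh achieves this is the sidecar pattern: a proxy sits next to each service and handles discovery, policy and metrics on its behalf. The sketch below is a deliberately simplified toy model (the `Sidecar` class and registry are invented for illustration, not any real mesh's API), showing how those concerns can live outside the service's own code.

```python
class Sidecar:
    """Toy model of a sidecar proxy: the service hands it a plain
    request, and service discovery, policy and metrics collection
    happen outside the service's own code."""

    def __init__(self, registry):
        self.registry = registry          # service name -> address
        self.metrics = {}                 # service name -> call count

    def call(self, service, request):
        address = self.registry[service]  # service discovery
        self.metrics[service] = self.metrics.get(service, 0) + 1
        return request(address)           # forward to the real endpoint
```

The calling service never learns where "billing" actually lives or how calls are counted; swapping the registry or the metrics backend requires no change to the service itself, which is exactly the property the paragraph above asks for.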
How can the integration of the service mesh optimize communication?
Each service that is added to an application, or a new instance of an existing service running in a container, complicates the communication environment and creates new points of potential failure. In a complex microservices architecture, it can become nearly impossible to identify where problems are occurring without the Service Mesh.
This is possible because the Service Mesh captures every aspect of communication between services in the form of performance metrics. For example, if a particular service fails, the Service Mesh can record how long it took before a retry succeeded. As failure-time data accumulates for a given service, rules can be written to determine the optimal wait time before calling that service again, so that the system is not overwhelmed by unnecessary retries.
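Such a rule can be quite simple. Below is a hedged sketch of one possible heuristic (the function name, the percentile choice, and the idea of deriving the wait from observed failure durations are assumptions for illustration, not a prescription from any particular mesh):

```python
def suggested_wait(failure_durations, percentile=0.95):
    """Derive a retry wait time from observed failure durations.

    A mesh accumulates per-service timing data; a rule like "wait at
    least as long as the 95th percentile of past failure durations"
    avoids hammering a service that is known to recover slowly.
    """
    if not failure_durations:
        return 0.0  # no history yet, retry immediately
    ordered = sorted(failure_durations)
    index = min(int(len(ordered) * percentile), len(ordered) - 1)
    return ordered[index]
```

The key point is that the data driving this decision comes from the mesh's metrics, not from instrumentation hand-written into each service.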
Implementing a Service Mesh to manage system communications can accelerate microservice deployment by providing a consistent approach that manages key network functions. The Service Mesh architecture is ideal for systems with a large number of services because it allows network issues to be separated from the application code and policies to be applied from a central source of truth, either universally or selectively based on specified criteria.
I hope this explanation can help you to choose the right mental model for understanding the source of the problems that Service Mesh solves. In the next article, I would like to explain how the AWS App Mesh can be used, as well as the pros and cons.