Overview
Microservices architecture describes a set of techniques for building a back-end server application as a set of smaller services. Each microservice has its own bounded context, database, and domain model. Each service is developed and deployed independently and runs in its own separate process. Microservices architecture is the opposite of monolithic application design, where a single database is used and any part of the application can invoke another part with a simple function call.

One advantage of microservices architecture is the possibility of scaling out individual services granularly, depending on their processing requirements. Suppose there is a web application used by graphic designers that allows them to upload images one by one and apply artistic effects. Users have to create a free account to access the basic free functionality; the site also provides a subscription model, where users can pay on a monthly or annual basis to unlock features like bulk image upload, access to dozens of advanced image effects, etc. Below is a comparison of the two scaling models.
Scaling monolithic application vs. scaling microservices
Scaling out a monolithic application means cloning it entirely. In this case, some parts of the application are needlessly copied across virtual machines and waste memory. By contrast, in a microservices architecture, only the services that require more processing power are scaled.
Inter-service communication
Microservices have to communicate with each other to exchange data and propagate changes. Based on their protocol mechanism, the two main types of inter-service communication are:
- Synchronous communication, based on synchronous protocols like HTTP or gRPC. One service makes a request to another service and waits for a response. This approach increases coupling between services; microservices must be isolated as much as possible, and one service must not depend directly on another. Also, if a request to a service in turn generates another request to a different service, the original request creates a chain of requests, which increases inter-service dependencies and adds undesirable latency. If the request is made to fetch some data (for example, some business entity properties), it is worth considering replicating that data (using integration events) into the database of the dependent service to avoid the extra request.
- Asynchronous communication, which can be provided by a message broker that supports the AMQP protocol, or by an event bus. In this case, the service originating the request publishes a message to a queue, and another service subscribes in order to receive it. The message broker places the message in the corresponding queue and delivers it to the subscriber. If the service originating the message (the publisher) needs a response from the destination service, the receiver (the subscriber) also has to publish a response message to a separate queue, and the originator service subscribes to that separate queue to wait for responses. The figure below depicts how it works:
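The synchronous style can be sketched in a few lines of Python. This is a minimal illustration, not a production setup: a hypothetical "catalog" service runs in a background thread and exposes a single HTTP endpoint, and the caller blocks until the response arrives. The endpoint path, the product data, and all names are invented for the example.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class CatalogHandler(BaseHTTPRequestHandler):
    """Stand-in for a hypothetical 'catalog' microservice."""

    def do_GET(self):
        # Respond with some illustrative business entity properties.
        body = json.dumps({"sku": "IMG-FX-01", "price": 9.99}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo output quiet

# Run the "catalog" service in a background thread on a free port.
server = HTTPServer(("127.0.0.1", 0), CatalogHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# The calling service blocks here until the response arrives --
# this waiting is exactly what makes the interaction synchronous.
with urllib.request.urlopen(f"http://127.0.0.1:{port}/products/IMG-FX-01") as resp:
    product = json.load(resp)

server.shutdown()
print(product["price"])  # the caller cannot proceed without this value
```

Note that the caller's thread is tied up for the whole round trip; a chain of such calls across several services multiplies this waiting time, which is the latency problem described above.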
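The asynchronous request/response pattern can also be sketched in-process. Here `queue.Queue` stands in for the broker's queues (a real system would use an AMQP broker such as RabbitMQ); the message contents and the `request_id` correlation field are illustrative assumptions, not part of any specific broker's API.

```python
import queue
import threading

request_queue = queue.Queue()   # originator (publisher) -> subscriber
response_queue = queue.Queue()  # subscriber -> originator (separate queue)

def subscriber():
    # The destination service consumes a message from the request
    # queue and publishes its reply to the separate response queue.
    message = request_queue.get()
    response_queue.put({"request_id": message["request_id"],
                        "status": "image processed"})

threading.Thread(target=subscriber, daemon=True).start()

# The originator publishes the message and is free to do other work;
# it subscribes to the response queue only when it needs the reply.
request_queue.put({"request_id": 42, "effect": "oil-paint"})
reply = response_queue.get(timeout=5)
print(reply["status"])
```

The key difference from the synchronous example is that publishing does not block: the originator could keep handling other requests and collect the reply later, correlating it to the original message via the `request_id` field.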