Multi-Microservice Communication Models

Hey guys, back with a blog about too many microservices and how to scale communication between them when you have 20, 50, or 100 microservices.

Why Opt for Microservices?

Building your product as a group of microservices allows you to develop, maintain, and scale each independent part of the product separately, without having to change much dependent code in other modules and services.

Microservice Architecture

Too Many Micro Services?

What are too many microservices? It’s the point where so many microservices are communicating with each other that they form a large graph-like network. At a scale of thousands of API requests per minute from one service to another, relaying the same data to multiple microservices at the same time for real-time analysis creates a very messy architecture and a huge load on the system itself.

For example, imagine a university system, where you have multiple services –

  1. Student portal – Handles everything for student-centric information, from registration to finances.
  2. Library portal – Handles everything happening inside the library: book registration, lending and returning issued books, etc.
  3. Academic portal – Handles everything related to students’ marks, etc.
  4. Hostel portal – Handles everything hostel-related, such as check-in, check-out, room numbers, laundry information, etc.

Now imagine that there are thousands of new students registering on the Student Portal at the same moment, and hundreds of different portals that need this information. The same information needs to be communicated to the other portals so they can create profiles and set up information. (In a real scenario it might not be this busy a system, but imagine a single portal catering to every university/college on this planet.) There are multiple ways to handle this scale –

Batch Poll and Pull Architecture

Solution Approach

All portals regularly call the Student Portal’s APIs to poll and fetch new information, at regular intervals such as every 5 to 15 minutes.

Batch Poll and Pull Architecture


When we have thousands of new students and hundreds of portals, and each portal polls the service every 5 minutes, there is up to a 5-minute lag before the information is reflected on each portal. This may be okay, but in real scenarios we like to keep things consistent with near-real-time updates. If we bring the interval down to, say, 1 minute, the load on the system becomes high: there would be hundreds of API calls to the Student Portal every minute, and the network bandwidth could be overloaded.
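The polling loop above can be sketched in a few lines. This is a minimal, runnable illustration, not the blog's actual system: `fetch_since` stands in for a call to a hypothetical Student Portal endpoint (e.g. `GET /students?since=<id>`), and the in-memory `REGISTRATIONS` list fakes the portal so the sketch runs without a server.

```python
# A minimal sketch of the poll-and-pull loop each consuming portal would run.
# "fetch_since" stands in for an HTTP call to the Student Portal's
# (hypothetical) incremental API, e.g. GET /students?since=<id>.
def poll_once(last_seen_id, fetch_since):
    """Fetch and process only students registered after the last seen id."""
    new_students = fetch_since(last_seen_id)
    for student in new_students:
        print(f"Creating profile for {student['name']}")
        last_seen_id = max(last_seen_id, student["id"])
    return last_seen_id

# In-memory stand-in for the Student Portal so the sketch is runnable.
REGISTRATIONS = [
    {"id": 1, "name": "Asha"},
    {"id": 2, "name": "Ravi"},
]
fake_fetch = lambda since: [s for s in REGISTRATIONS if s["id"] > since]

cursor = 0
cursor = poll_once(cursor, fake_fetch)   # processes both students
cursor = poll_once(cursor, fake_fetch)   # nothing new, no duplicates
# A real portal would wrap poll_once in: while True: poll_once(...); time.sleep(300)
```

The cursor (`last_seen_id`) is what keeps repeated polls from re-processing old registrations; the 5-minute sleep at the end is exactly the lag discussed above.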

Push and Relay Architecture – Sender’s Responsibility

Solution Approach

The Student Portal in this case sends an API call to every other portal on every new student registration.

Push And Relay Architecture


When we have thousands of new students and hundreds of portals, there would be hundreds of calls per student, which would lead to hundreds of thousands of calls per minute. If that were not enough, the responsibility of registering every newly built service and relaying data to it would fall on the sender, and this would cause huge scalability issues.

At the same time, we can take the Push and Relay idea and use the concept in a better way. The bottlenecks above were the HTTP calls and new-service registration. HTTP calls are inherently tightly coupled, meaning we have to wait for each call to be acknowledged by the receiver. Making them asynchronous would help a bit, but waiting on all these network calls would still have a huge network impact. The sender would also need to keep track of whom to send the data to and register new services as they are built.
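The sender-side fan-out and registry burden described above can be sketched as follows. This is an illustrative stand-in, not the real system: the portal URLs are made up, and `send` fakes the HTTP POST so the sketch runs without a network.

```python
# A minimal sketch of the sender's fan-out in Push and Relay.
# The registry is the list the sender itself must maintain; every new
# portal means editing this list. URLs here are hypothetical.
PORTAL_REGISTRY = [
    "http://library.example/students",
    "http://academics.example/students",
    "http://hostel.example/students",
]

def relay_registration(student, send):
    """One call per registered portal: O(portals) work per student."""
    delivered = []
    for url in PORTAL_REGISTRY:
        send(url, student)          # sender blocks until each portal acks
        delivered.append(url)
    return delivered

# Fake transport standing in for an HTTP POST, so the sketch is runnable.
calls = []
fake_send = lambda url, payload: calls.append((url, payload))
relay_registration({"id": 1, "name": "Asha"}, fake_send)
# With hundreds of portals and thousands of registrations per minute,
# this loop is the bottleneck the text describes.
```

Note how the coupling is structural: even with async calls, the sender still owns `PORTAL_REGISTRY`, which is exactly what Fire and Forget removes.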

Fire and Forget Architecture

Solution Approach

Instead of making hundreds of API calls to different services, let’s make a single network call to a “Broadcasting Channel / System”. Each service subscribes to this channel. The sender then “fires” a message at the channel and “forgets” about it. All services subscribed to the channel receive the message. Any new service being deployed can simply subscribe to the channel and start receiving messages. The sender never has to worry about who is subscribed or whether the message was relayed. Of course, there can be a Poll and Pull fallback system for any missed messages, but this architecture makes these microservices very easy to maintain and scale to huge traffic.

Fire And Forget Architecture

Code Example

To demonstrate this, let’s spin up 4 microservices, where one sends information and the rest consume it. To keep this a simple 10-minute tutorial, I have used the following stack, which you can easily customise.

  1. Language – Python
  2. Broadcasting System / Message Queue – Redis Pub/Sub (you can use Kafka, AWS SQS/ EventBridge, RabbitMQ, etc)

I personally like Redis Pub/Sub as it is very easy to set up: you can spin up the message queue in just 3 lines of code. It is also very scalable and performant for large production setups, but you are free to use Kafka, RabbitMQ, or any managed service like AWS SQS as you wish.

To set up, make sure you have Python 3.6+ (preferably Python 3.8) installed and Docker available on your command line. I am using Docker to deploy Redis, but you can install Redis natively using brew/apt-get/curl/an installer; just have a Redis server up and running. The following commands are for Unix-based systems (Ubuntu/macOS/etc.) but can be adapted slightly for Windows.

Steps to setup project –

  1. Run the following command to start the Redis server using the official Redis Docker image.
docker run --name redis -d -p 6379:6379 redis
  2. Change directory to your desired folder, create a virtual environment, and install the redis library. Then create 4 Python files, each representing a single microservice.
mkdir project
cd project
virtualenv venv -p python3
source venv/bin/activate
pip install redis
  3. Now open these files in any code editor and add the following code to the first file. This microservice will send the data to all the other services.
import redis

# Connect to the Redis server running on localhost:6379 (the default).
server = redis.Redis()

print("App1 started.")
while True:
    msg = input("Enter message - ")
    print(f'[app1] Sending "{msg}" to everyone.')
    # Publish the message to the channel; every subscriber receives it.
    server.publish("channel_1", msg)

    if msg == "exit":
        break
As we can see, all we did was import the redis package, create a connection to the Redis server, and publish every input message to a channel (message queue) named “channel_1”. You can name your channel anything, create multiple channels, and so on.

  4. Now, for every other service, insert the following code and change the print statements to indicate which file it is.
import redis

# Connect to the same Redis server the publisher uses.
client = redis.Redis()

# Create a pub/sub object and subscribe to the publisher's channel.
client_channel = client.pubsub()
client_channel.subscribe("channel_1")

print("Waiting to receive messages.")
for item in client_channel.listen():
    # Skip subscription confirmations; we only care about actual messages.
    if item["type"] != "message":
        continue
    msg = item["data"].decode("utf-8")
    if msg == "exit":
        break
    # Change the name of the service below.
    print(f"[app2] Received item - {msg}")


  5. Now run each file in a separate terminal/shell window (remember to activate the virtual environment in each window).

Now start typing messages in app1; as you send each message, you can see that every other service receives it instantly. This is because each of the other services has subscribed to “channel_1” and listens on that channel, printing each message it receives.

Code Output Screenshot
  6. Send the exit message to shut down all the services.

There, you have created a simple broadcast system. You can customise it, scale it to production level in your own code bases, use any other message queue, apply filters to the channel, create and subscribe to multiple channels at once; the ocean is all yours 😀 ❤
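The “filters and multiple channels” idea can be sketched without a server. In real Redis, `pubsub.psubscribe("student.*")` subscribes to every channel matching a glob pattern; the tiny in-process broker below mimics that matching (the broker class and channel names here are made up for illustration, not part of Redis).

```python
import fnmatch

# A server-free stand-in for pattern subscriptions: Redis's
# psubscribe("student.*") matches channel names against glob
# patterns much like fnmatch does here.
class MiniBroker:
    def __init__(self):
        self.subscribers = []          # (pattern, handler) pairs

    def psubscribe(self, pattern, handler):
        self.subscribers.append((pattern, handler))

    def publish(self, channel, msg):
        # Deliver to every handler whose pattern matches the channel.
        for pattern, handler in self.subscribers:
            if fnmatch.fnmatch(channel, pattern):
                handler(channel, msg)

broker = MiniBroker()
seen = []
broker.psubscribe("student.*", lambda ch, m: seen.append((ch, m)))
broker.psubscribe("library.returns", lambda ch, m: seen.append((ch, m)))

broker.publish("student.registered", "Asha")   # matches student.*
broker.publish("library.returns", "Book 42")   # exact channel match
broker.publish("hostel.checkin", "Room 7")     # no subscriber, dropped
```

With namespaced channel names like these, each portal subscribes only to the event families it cares about, instead of filtering every message after receipt.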

Do share if you liked this approach.
