Quick introduction to RabbitMQ
RabbitMQ is an open-source message broker: it receives messages from producers (publishers) and delivers them to consumers (subscribers). Producers and consumers typically reside on different machines and know nothing about each other. Behind the scenes the broker keeps messages in queues. Published messages are not placed directly into a queue; they are first received by an exchange, which routes them to the corresponding queue(s) depending on the routing key and the exchange type. Messages are then pushed to consumers subscribed to the queues, or consumers pull messages from the queues on demand.
rabbitmq-basic.png
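The exchange-based routing described above can be sketched with the RabbitMQ .NET client. The exchange, queue, and routing-key names below are hypothetical, and the broker is assumed to be reachable on localhost:

```csharp
using System.Text;
using RabbitMQ.Client;

var factory = new ConnectionFactory() { HostName = "localhost" };
using (var connection = factory.CreateConnection())
using (var channel = connection.CreateModel())
{
    // Declare a direct exchange and a queue, then bind them with a routing key.
    channel.ExchangeDeclare(exchange: "demo-exchange", type: ExchangeType.Direct);
    channel.QueueDeclare(queue: "demo-queue", durable: false, exclusive: false,
                         autoDelete: false, arguments: null);
    channel.QueueBind(queue: "demo-queue", exchange: "demo-exchange",
                      routingKey: "demo-key");

    // The exchange routes this message to demo-queue because the routing key
    // matches the binding declared above.
    channel.BasicPublish(exchange: "demo-exchange", routingKey: "demo-key",
                         body: Encoding.UTF8.GetBytes("hello"));
}
```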
Project description
RabbitMQ will be used from two .NET Core console applications. One console plays the role of the producer: it generates and sends messages to a queue; the second console subscribes to the queue to receive and process the messages. If there are multiple subscribers at the same time, the produced messages are distributed equally among them, i.e. the messages are delivered in a round-robin manner.
RabbitMQ setup
RabbitMQ, along with its management plugin, can be installed manually or started from a Docker image using the following command:
docker run --name SDBRabbitMQ -d -p 5672:5672 -p 15672:15672 rabbitmq:management
Note that in this case the string SDBRabbitMQ is the name of the running container; you can choose any other valid container name. The management plugin provides a web UI for monitoring messages and queues; it can be accessed at http://localhost:15672/, log in with the default credentials (user name: guest, password: guest). Additionally, the broker status can be inspected with the rabbitmqctl command-line tool; first find the id of the running RabbitMQ container:
docker ps -aqf "name=SDBRabbitMQ"
docker exec 68d81919b96d rabbitmqctl status
rabbitmq-setup
Producer application
The producer console creates a connection to the broker, declares a queue, and then sends messages in a loop at an approximate rate of two messages per second (the rate is set by Task.Delay(500)). Since the producer uses the default exchange (which is of type direct), the routing key must be the same as the queue name (SDBQueue); otherwise, the messages are dropped without any notification. The default exchange is implicitly bound to every queue, with a routing key equal to the queue name. It is not possible to explicitly bind to, or unbind from, the default exchange. The main C# source code fragments are listed below:

using System;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using RabbitMQ.Client;
...
static void Main(string[] args)
{
    var queue = "SDBQueue";

    var factory = new ConnectionFactory() { HostName = "localhost" };
    using (var connection = factory.CreateConnection())
    using (var channel = connection.CreateModel())
    {
        // Declare the queue; the call is idempotent, so it is safe to repeat it.
        channel.QueueDeclare(queue: queue,
                             durable: true,
                             exclusive: false,
                             autoDelete: false,
                             arguments: null);

        var cts = new CancellationTokenSource();
        Task.Run(async () =>
        {
            var nr = 0;
            while (!cts.IsCancellationRequested)
            {
                var message = $"Message {++nr}";
                var content = Encoding.UTF8.GetBytes(message);

                await Task.Delay(500);
                // An empty exchange name means the default exchange, which routes
                // the message to the queue whose name equals the routing key.
                channel.BasicPublish(exchange: "",
                                     routingKey: queue,
                                     body: content);
                Console.WriteLine($"send: {message}");
            }
        });

        Console.WriteLine("Press any key to exit.");
        Console.ReadKey();
        cts.Cancel();
    }
}
...
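One detail worth noting: durable: true only makes the queue definition survive a broker restart; individual messages are kept across a restart only if they are published as persistent. A minimal sketch of a persistent publish, placed inside the loop above:

```csharp
// Mark the message as persistent so a durable queue keeps it across broker restarts.
var props = channel.CreateBasicProperties();
props.Persistent = true;
channel.BasicPublish(exchange: "",
                     routingKey: queue,
                     basicProperties: props,
                     body: content);
```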
The producer continuously publishes messages to the SDBQueue; the result can be checked in the management plugin at http://localhost:15672/#/queues. Notice in the figure below that the number of messages in the Ready state constantly increases. Leave the producer application running in order to provide a constant flow of new messages.
rabbitmq-management-ui-plugin-check-queue-messages.png
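Besides the web UI, the number of ready messages can also be read programmatically; the RabbitMQ .NET client exposes a MessageCount method on the channel (a sketch, assuming an open channel as in the producer code):

```csharp
// Returns the number of messages in the queue that are ready to be delivered.
uint ready = channel.MessageCount("SDBQueue");
Console.WriteLine($"messages ready: {ready}");
```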
Consumer application
Now that there is a running producer console, it makes sense to write and run a consumer. The application creates a connection to the same broker and begins consuming messages from the queue. Messages are processed at a rate of one message per second (the rate is set by a Thread.Sleep(1000) call, which simulates a long-running task).

using System;
using System.Text;
using System.Diagnostics;
using System.Threading;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;
...
static void Main(string[] args)
{
    var queue = "SDBQueue";

    var factory = new ConnectionFactory() { HostName = "localhost" };
    using (var connection = factory.CreateConnection())
    using (var channel = connection.CreateModel())
    {
        var consumer = new EventingBasicConsumer(channel);
        consumer.Received += (model, deliveryArgs) =>
        {
            var message = Encoding.UTF8.GetString(deliveryArgs.Body.ToArray());
            Console.WriteLine($"received: \"{message}\"");
            Thread.Sleep(1000);
        };

        var consumeTag = channel.BasicConsume(queue: queue,
                                autoAck: true,
                                consumer: consumer);
        Console.WriteLine("Press any key to exit.");
        Console.ReadKey();
        channel.BasicCancel(consumeTag);
    }
}
...
By this moment the queue (SDBQueue) should have at least a dozen new messages waiting to be consumed. Run the consumer and check the queue details in the RabbitMQ management plugin again; the figure below displays the result.
rabbitmq-management-ui-plugin-run-consumer
Notice that the number of queued messages in the top graph has abruptly dropped to zero, because all waiting messages were dequeued and delivered to the consumer. The spike in the second graph shows that the broker received acknowledgements for the delivered messages. But the consumer is still processing messages one by one (remember that the consumer can handle only one message per second): it loaded dozens of messages in a fraction of a second, yet it will take at least N seconds to handle N messages. Moreover, the producer is constantly generating new messages, which are also instantly delivered to the consumer. This raises a simple question: what happens if we close the consumer prematurely, before it has processed all the loaded (in-memory) messages? The answer is that those messages are lost. One way to avoid this behavior is to have the consumer acknowledge each message to the broker manually. As a result, messages remain queued on the broker side until the consumer notifies the broker of their receipt; a message is considered delivered only after it has been fully processed. This way, no messages are lost if a consumer crashes. The following C# fragment demonstrates how to acknowledge messages manually:

...
static void Main(string[] args)
{
    var queue = "SDBQueue";

    var factory = new ConnectionFactory() { HostName = "localhost" };
    using (var connection = factory.CreateConnection())
    using (var channel = connection.CreateModel())
    {
        var consumer = new EventingBasicConsumer(channel);
        consumer.Received += (model, deliveryArgs) =>
        {
            var message = Encoding.UTF8.GetString(deliveryArgs.Body.ToArray());
            Console.WriteLine($"received: \"{message}\"");
            Thread.Sleep(1000);
            // Acknowledge only after the message has been fully processed.
            channel.BasicAck(deliveryTag: deliveryArgs.DeliveryTag, multiple: false);
        };

        var consumeTag = channel.BasicConsume(queue: queue,
                                autoAck: false,
                                consumer: consumer);
        Console.WriteLine("Press any key to exit.");
        Console.ReadKey();
        channel.BasicCancel(consumeTag);
    }
}
...
Close the old consumer, apply the source code changes, and run the new consumer console (by this moment the queue should again contain newly published messages). This time the graphs in the management plugin will look different; check the figure below:
rabbitmq-management-ui-plugin-consumer-graph-manual-ackn.png
As the graph shows, the total number of queued messages stays high even though the number of messages in the Ready state has suddenly decreased to zero. The number of unacknowledged messages (Unacked state) has risen sharply, because the consumer has loaded all available messages and the broker expects a notification after every message is handled; until then the messages remain queued.
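The consumer is able to load all pending messages at once because, by default, the broker pushes messages without any limit. A possible refinement, not used in the code above, is to cap the number of unacknowledged messages per consumer with BasicQos, placed before the BasicConsume call:

```csharp
// Deliver at most one unacknowledged message at a time to this consumer;
// the next message is pushed only after the previous one is acknowledged.
channel.BasicQos(prefetchSize: 0, prefetchCount: 1, global: false);
```

With a prefetch count set, a slow consumer no longer buffers the whole queue in memory, and dispatch among multiple consumers becomes fairer: the broker hands a new message to whichever consumer is actually free.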
Multiple consumers
So far there is one working consumer that can process only one message per second, while the producer generates two messages per second, twice as many as the consumer can handle. Hence, every new message has to wait longer than the previous one. The message processing rate clearly has to be increased; one easy way is to scale the consumer horizontally by running more copies of the application. Open two more consumer consoles, so that there are three running consumers. The broker will distribute new messages among the three consumers in a round-robin fashion. The overall processing rate will increase to 3 messages per second, which should be enough to cope with the incoming rate of 2 messages per second. The next figure depicts the message distribution mechanism.
distribute-messages-round-robin-manner.png