RabbitMQ keeps messages in memory (memory overflow)
I have a RabbitMQ server running in a cluster (2 nodes). All queues are durable and mirrored, and all messages are published as persistent.
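For reference, the publishing side looks roughly like this (a minimal sketch using the pika client; the queue name and payload are made-up examples, and the mirroring policy is assumed to be configured separately on the broker):

    import pika

    # Minimal publisher sketch: durable queue + persistent messages.
    connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = connection.channel()

    channel.queue_declare(queue="db-sync", durable=True)  # queue survives broker restart

    channel.basic_publish(
        exchange="",
        routing_key="db-sync",
        body=b"one row worth of change data",  # placeholder payload
        properties=pika.BasicProperties(delivery_mode=2),  # 2 = persistent message
    )
    connection.close()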
I wrote an application that synchronizes database changes over RabbitMQ queues.
In most cases the queue stays empty, because the consumer can read changes as fast as the producer publishes them.
Unfortunately, during the initial synchronization (when all rows from all tables are transferred) a lot of messages (10 GB, for example) pile up in the queue waiting to be consumed, because reading data from the database is usually faster than writing it. I assumed these messages would be saved to disk, but it seems they are all kept in RAM as well. So after a while all RAM is used (no matter how much I have) and RabbitMQ starts to block the publishers.
Does anyone know why RabbitMQ keeps all durable and persistent messages in RAM as well? Is this a "by design" feature?
I tried different message sizes (from 512 kB to 5 MB); the result was the same. Having the consumer connected or not, or setting a different QoS on it, doesn't make any difference either.
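For completeness, the consumer side and its QoS were set roughly like this (a sketch with the pika client; the queue name and prefetch value are example values):

    import pika

    # Minimal consumer sketch: prefetch_count limits how many unacked
    # messages the broker pushes to this consumer at a time.
    connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = connection.channel()
    channel.basic_qos(prefetch_count=100)  # example value; varying it changed nothing

    def on_message(ch, method, properties, body):
        # apply the change to the target database here, then ack
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue="db-sync", on_message_callback=on_message)
    channel.start_consuming()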
Versions: RabbitMQ 3.1.0, Erlang R14B04
Solution 1:
It is a feature: even though your messages are persistent, RabbitMQ will keep them in RAM. The reasoning is that if there is enough RAM, there is no need to incur the cost of a disk read. Under memory pressure, RabbitMQ pages these messages out to disk (it does this even for non-persistent messages).
Your publishers are blocked by flow control; you can adjust the threshold that triggers flow control with the vm_memory_high_watermark setting.
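In rabbitmq.config the relevant knob looks roughly like this (Erlang-term config format; 0.4 is the documented default fraction of system RAM, shown here just as an illustration):

    %% rabbitmq.config -- note the trailing dot
    [
      {rabbit, [
        %% fraction of installed RAM at which publishers get blocked;
        %% 0.4 is the default, raise it if you have RAM to spare
        {vm_memory_high_watermark, 0.4}
      ]}
    ].

Raising the watermark only delays the blocking, though; under a sustained backlog like your initial sync, the broker will still page messages to disk and throttle publishers once the new threshold is reached.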