low disk watermark [??%] exceeded on

I use Elasticsearch 1.4.4 on my development machine (a single notebook). Everything is at its defaults because I have never changed any settings.

When I start it, I usually get the following message:

[2015-10-27 09:38:31,588][INFO ][node                     ] [Milan] version[1.4.4], pid[33932], build[c88f77f/2015-02-19T13:05:36Z]
[2015-10-27 09:38:31,588][INFO ][node                     ] [Milan] initializing ...
[2015-10-27 09:38:31,592][INFO ][plugins                  ] [Milan] loaded [], sites []
[2015-10-27 09:38:34,665][INFO ][node                     ] [Milan] initialized
[2015-10-27 09:38:34,665][INFO ][node                     ] [Milan] starting ...
[2015-10-27 09:38:34,849][INFO ][transport                ] [Milan] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/10.81.1.108:9300]}
[2015-10-27 09:38:35,022][INFO ][discovery                ] [Milan] elasticsearch/DZqnmWIZRpapZY_TPkkMBw
[2015-10-27 09:38:38,787][INFO ][cluster.service          ] [Milan] new_master [Milan][DZqnmWIZRpapZY_TPkkMBw][THINKANDACT1301][inet[/10.81.1.108:9300]], reason: zen-disco-join (elected_as_master)
[2015-10-27 09:38:38,908][INFO ][http                     ] [Milan] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/10.81.1.108:9200]}
[2015-10-27 09:38:38,908][INFO ][node                     ] [Milan] started
[2015-10-27 09:38:39,220][INFO ][gateway                  ] [Milan] recovered [4] indices into cluster_state
[2015-10-27 09:39:08,801][INFO ][cluster.routing.allocation.decider] [Milan] low disk watermark [15%] exceeded on [DZqnmWIZRpapZY_TPkkMBw][Milan] free: 58.6gb[12.6%], replicas will not be assigned to this node
[2015-10-27 09:39:38,798][INFO ][cluster.routing.allocation.decider] [Milan] low disk watermark [15%] exceeded on [DZqnmWIZRpapZY_TPkkMBw][Milan] free: 58.6gb[12.6%], replicas will not be assigned to this node
[2015-10-27 09:40:08,801][INFO ][cluster.routing.allocation.decider] [Milan] low disk watermark [15%] exceeded on [DZqnmWIZRpapZY_TPkkMBw][Milan] free: 58.6gb[12.6%], replicas will not be assigned to this node
....

I see a lot of these "low disk watermark ... exceeded on..." messages. What went wrong in my case, and how do I fix it? Thanks!

UPDATE

Before posting, I searched SO for related questions. I found one about a "high watermark..." message, but in that case the disk space really was low. In my case, I checked and there are still 56GB free on my disk.

UPDATE

Based on the input from Andrei Stefan, I need to change the settings. Should I do it the following way:

curl -XPUT localhost:9200/_cluster/settings -d '{
    "transient" : {
        "cluster.routing.allocation.disk.threshold_enabled" : false
    }
}'

Or is there a settings file I can edit to set it instead?
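
For reference, I think the equivalent in the node's config file would look like this (untested on my side; the file location depends on the install):

# config/elasticsearch.yml (or /etc/elasticsearch/elasticsearch.yml on many Linux packages)
# Disables the disk-based shard allocation decider entirely -- fine for a single
# dev node, not something you would want on a production cluster.
cluster.routing.allocation.disk.threshold_enabled: false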


Solution 1:

If you, like me, have a lot of disk space, you can tune the watermark settings and use byte values instead of percentages:

NB! You cannot mix percentage values and byte values within these settings: either all of them are set to percentage values, or all of them are set to byte values.

Setting: cluster.routing.allocation.disk.watermark.low

Controls the low watermark for disk usage. It defaults to 85%, meaning ES will not allocate new shards to nodes once they have more than 85% disk used. It can also be set to an absolute byte value (like 500mb) to prevent ES from allocating shards if less than the configured amount of space is available.

Setting: cluster.routing.allocation.disk.watermark.high

Controls the high watermark. It defaults to 90%, meaning ES will attempt to relocate shards to another node if the node disk usage rises above 90%. It can also be set to an absolute byte value (similar to the low watermark) to relocate shards once less than the configured amount of space is available on the node.

Setting: cluster.routing.allocation.disk.watermark.flood_stage

Controls the flood stage watermark. It defaults to 95%, meaning that Elasticsearch enforces a read-only index block (index.blocks.read_only_allow_delete) on every index that has one or more shards allocated on the node that has at least one disk exceeding the flood stage. This is a last resort to prevent nodes from running out of disk space. The index block must be released manually once there is enough disk space available to allow indexing operations to continue.

https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-cluster.html#disk-based-shard-allocation

Please note:

Percentage values refer to used disk space, while byte values refer to free disk space. This can be confusing, since it flips the meaning of high and low. For example, it makes sense to set the low watermark to 10gb and the high watermark to 5gb, but not the other way around.

On my 5TB disk I've set:

# /etc/elasticsearch/elasticsearch.yml
cluster.routing.allocation.disk.threshold_enabled: true
cluster.routing.allocation.disk.watermark.flood_stage: 5gb
cluster.routing.allocation.disk.watermark.low: 30gb
cluster.routing.allocation.disk.watermark.high: 20gb

Edit: Added cluster.routing.allocation.disk.watermark.flood_stage as per the other answer.
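
If editing elasticsearch.yml and restarting the node is inconvenient, the same byte values can also be applied at runtime through the cluster settings API. A sketch using the values above (assuming a version recent enough to have flood_stage; transient settings are lost on a full cluster restart):

# Apply the same watermarks as transient cluster settings (no restart needed).
curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_cluster/settings -d '{
    "transient": {
        "cluster.routing.allocation.disk.watermark.low": "30gb",
        "cluster.routing.allocation.disk.watermark.high": "20gb",
        "cluster.routing.allocation.disk.watermark.flood_stage": "5gb"
    }
}'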

Solution 2:

I know this is an old post, but my comment may make someone happy. In order to specify the watermarks as byte values (gb or mb), you have to add cluster.routing.allocation.disk.watermark.flood_stage to your Elasticsearch settings file, elasticsearch.yml. Complete example:

cluster.routing.allocation.disk.threshold_enabled: true
cluster.routing.allocation.disk.watermark.flood_stage: 200mb
cluster.routing.allocation.disk.watermark.low: 500mb
cluster.routing.allocation.disk.watermark.high: 300mb

Note: without specifying cluster.routing.allocation.disk.watermark.flood_stage, the byte values (gb or mb) will not work.
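
To see how much free disk each node actually has from Elasticsearch's point of view (and therefore how close it is to the configured watermarks), the cat allocation API is a quick sanity check:

# Shows shard count, disk used, disk available and disk percent per node
curl 'http://localhost:9200/_cat/allocation?v'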

Solution 3:

In my case, I just had to turn off the threshold:

Run Elasticsearch:

elasticsearch

In another tab, run:

curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_cluster/settings -d '{ "transient": { "cluster.routing.allocation.disk.threshold_enabled": false } }'


curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'

macOS Catalina, Elasticsearch installed via Homebrew.
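
To confirm that the transient override took effect, you can read the cluster settings back (a generic check, not specific to this macOS setup):

# The "transient" block should list cluster.routing.allocation.disk.threshold_enabled: "false"
curl 'http://localhost:9200/_cluster/settings?pretty'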