elasticsearch - what to do with unassigned shards
My cluster is in yellow status because some shards are unassigned. What should I do about this?
I tried setting cluster.routing.allocation.disable_allocation = false on all indexes, but I think this doesn't work because I'm using version 1.1.1.
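(That setting would normally be applied as a transient cluster setting, e.g. something along these lines, shown only for illustration:)
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{ "transient" : { "cluster.routing.allocation.disable_allocation" : false } }'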
I also tried restarting all machines, but the same thing happens.
Any ideas?
EDIT:
Cluster status:
{
  "cluster_name" : "elasticsearch",
  "status" : "red",
  "timed_out" : false,
  "number_of_nodes" : 5,
  "number_of_data_nodes" : 4,
  "active_primary_shards" : 4689,
  "active_shards" : 4689,
  "relocating_shards" : 0,
  "initializing_shards" : 10,
  "unassigned_shards" : 758
}
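For reference, this is the output of the cluster health API:
curl -s 'http://localhost:9200/_cluster/health?pretty'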
Solution 1:
There are many possible reasons why allocation won't occur:
- You are running different versions of Elasticsearch on different nodes
- You only have one node in your cluster, but you have the number of replicas set to something other than zero.
- You have insufficient disk space.
- You have shard allocation disabled.
- You have a firewall or SELinux enabled. With SELinux enabled but not configured properly, you will see shards stuck in INITIALIZING or RELOCATING forever.
As a general rule, you can troubleshoot things like this:
- Look at the nodes in your cluster:
curl -s 'localhost:9200/_cat/nodes?v'
If you only have one node, you need to set number_of_replicas to 0. (See ES documentation or other answers.)
- Look at the disk space available in your cluster:
curl -s 'localhost:9200/_cat/allocation?v'
- Check cluster settings:
curl 'http://localhost:9200/_cluster/settings?pretty'
and look for cluster.routing settings.
- Look at which shards are UNASSIGNED:
curl -s localhost:9200/_cat/shards?v | grep UNASS
- Try to force a shard to be assigned:
curl -XPOST -d '{ "commands" : [ { "allocate" : { "index" : ".marvel-2014.05.21", "shard" : 0, "node" : "SOME_NODE_HERE", "allow_primary":true } } ] }' http://localhost:9200/_cluster/reroute?pretty
Look at the response and see what it says. There will be a bunch of YES's that are ok, and then a NO. If there aren't any NO's, it's likely a firewall/SELinux problem.
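If a lot of shards are stuck in UNASSIGNED, you can loop the same allocate command over everything reported by _cat/shards. This is only a rough sketch for a 1.x cluster; the node name SOME_NODE_HERE is a placeholder, and allow_primary can bring up an empty primary, so only use it when you accept possible data loss:
# build "index/shard" pairs for every unassigned shard and allocate each one
for pair in $(curl -s 'localhost:9200/_cat/shards' | grep UNASSIGNED | awk '{print $1 "/" $2}'); do
  index=${pair%/*}
  shard=${pair#*/}
  curl -XPOST 'http://localhost:9200/_cluster/reroute?pretty' -d "{
    \"commands\" : [ { \"allocate\" : { \"index\" : \"$index\", \"shard\" : $shard, \"node\" : \"SOME_NODE_HERE\", \"allow_primary\" : true } } ]
  }"
done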
Solution 2:
This is a common issue arising from the default index settings, in particular when you try to replicate on a single node. To fix this with a transient cluster setting, do this:
curl -XPUT http://localhost:9200/_settings -d '{ "number_of_replicas" : 0 }'
Next, enable the cluster to reallocate shards (you can always change this setting back after all is said and done):
curl -XPUT http://localhost:9200/_cluster/settings -d '
{
  "transient" : {
    "cluster.routing.allocation.enable" : "all"
  }
}'
Now sit back and watch the cluster clean up the unassigned replica shards. If you want this to take effect for future indices, don't forget to modify the elasticsearch.yml file with the following setting and bounce the cluster:
index.number_of_replicas: 0
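One way to sanity-check the result is the _cat/indices output, where the rep column should now show 0 for every index:
curl -s 'localhost:9200/_cat/indices?v'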
Solution 3:
Those unassigned shards are actually unassigned replicas of your actual shards from the master node.
In order to assign these shards, you need to run a new instance of Elasticsearch to create a secondary node to carry the data replicas.
EDIT: Sometimes the unassigned shards belong to indexes that have been deleted, making them orphan shards that will never be assigned regardless of how many nodes you add. But that's not the case here!
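A minimal sketch of bringing up that second node from a 1.x tarball install (the path and node name here are only illustrative; the cluster.name must match the existing cluster so the new node can join and receive the replicas):
cd /path/to/second/elasticsearch-1.1.1
./bin/elasticsearch -Des.cluster.name=elasticsearch -Des.node.name=node-2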
Solution 4:
The only thing that worked for me was changing the number_of_replicas (I had 2 replicas, so I changed it to 1 and then changed back to 2).
First:
PUT /myindex/_settings
{
  "index" : {
    "number_of_replicas" : 1
  }
}
Then:
PUT /myindex/_settings
{
  "index" : {
    "number_of_replicas" : 2
  }
}
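The same toggle via curl, if you are not using the Sense/Kibana console (the index name is just the example from above):
curl -XPUT 'http://localhost:9200/myindex/_settings' -d '{ "index" : { "number_of_replicas" : 1 } }'
curl -XPUT 'http://localhost:9200/myindex/_settings' -d '{ "index" : { "number_of_replicas" : 2 } }'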
Solution 5:
The first two points of the answer by Alcanzar did it for me, but I had to add
"allow_primary" : true
like so:
curl -XPOST http://localhost:9200/_cluster/reroute?pretty -d '{
  "commands": [
    {
      "allocate": {
        "index": ".marvel-2014.05.21",
        "shard": 0,
        "node": "SOME_NODE_HERE",
        "allow_primary": true
      }
    }
  ]
}'
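To check that the shard actually initialized after the reroute, you can look at its state, for example:
curl -s 'localhost:9200/_cat/shards/.marvel-2014.05.21?v'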