Labels: :Distributed Coordination/Allocation, >enhancement, Supportability, Team:Distributed Coordination
Description
If a shard is allocated to an unexpected node, the allocation explain API may indicate `can_remain_on_current_node: yes`, but it does not return the node allocation decisions for the current node (even if `?include_yes_decisions` is specified). This is unhelpful, because users need to see the `explanation` field of those decisions in order to understand why Elasticsearch believes the shard can remain on its current node.
Example
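The response below can be reproduced with a request along these lines (a minimal sketch; the index, shard, and primary values simply match the example response):

```
GET _cluster/allocation/explain?include_yes_decisions=true
{
  "index": "test",
  "shard": 0,
  "primary": true
}
```

Response: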
```json
{
  "index": "test",
  "shard": 0,
  "primary": true,
  "current_state": "started",
  "current_node": {
    "id": "w30fp8P4Tiq-fQPWiUne5Q",
    "name": "instance-0000000028",
    "transport_address": "172.27.200.205:19319",
    "attributes": {
      "server_name": "instance-0000000028.c585b8ad1d2c4641bad3e09df10a99ee",
      "xpack.installed": "true",
      "transform.config_version": "10.0.0",
      "ml.config_version": "12.0.0",
      "data": "hot",
      "logical_availability_zone": "zone-1",
      "availability_zone": "eu-west-1a",
      "instance_configuration": "aws.data.highio.i3",
      "region": "eu-west-1"
    },
    "roles": [
      "data_content",
      "data_hot",
      "ingest",
      "master",
      "remote_cluster_client",
      "transform"
    ],
    "weight_ranking": 1
  },
  "can_remain_on_current_node": "yes",
  "can_rebalance_cluster": "yes",
  "can_rebalance_to_other_node": "no",
  "rebalance_explanation": "This shard is in a well-balanced location and satisfies all allocation rules so it will remain on this node. Elasticsearch cannot improve the cluster balance by moving it to another node. If you expect this shard to be rebalanced to another node, find the other node in the node-by-node explanation and address the reasons which prevent Elasticsearch from rebalancing this shard there.",
  "node_allocation_decisions": [
    {
      "node_id": "TYSP3YlyQbSoEZciMcr0Kg",
      "node_name": "instance-0000000027",
      "transport_address": "172.27.207.72:19885",
      "node_attributes": {
        "server_name": "instance-0000000027.c585b8ad1d2c4641bad3e09df10a99ee",
        "transform.config_version": "10.0.0",
        "xpack.installed": "true",
        "ml.config_version": "12.0.0",
        "data": "hot",
        "logical_availability_zone": "zone-0",
        "availability_zone": "eu-west-1b",
        "instance_configuration": "aws.data.highio.i3",
        "region": "eu-west-1"
      },
      "roles": [
        "data_content",
        "data_hot",
        "ingest",
        "master",
        "remote_cluster_client",
        "transform"
      ],
      "node_decision": "worse_balance",
      "weight_ranking": 1,
      "deciders": [
        {
          "decider": "max_retry",
          "decision": "YES",
          "explanation": "shard has no previous failures"
        },
        {
          "decider": "replica_after_primary_active",
          "decision": "YES",
          "explanation": "shard is primary and can be allocated"
        },
        {
          "decider": "enable",
          "decision": "YES",
          "explanation": "all allocations are allowed"
        },
        {
          "decider": "index_version",
          "decision": "YES",
          "explanation": "can relocate primary shard from a node with index version [8.17.0-8.17.4] to a node with equal-or-newer index version [8.17.0-8.17.4]"
        },
        {
          "decider": "node_version",
          "decision": "YES",
          "explanation": "can relocate primary shard from a node with version [8.17.4] to a node with equal-or-newer version [8.17.4]"
        },
        {
          "decider": "snapshot_in_progress",
          "decision": "YES",
          "explanation": "no snapshots are currently running"
        },
        {
          "decider": "restore_in_progress",
          "decision": "YES",
          "explanation": "ignored as shard is not being recovered from a snapshot"
        },
        {
          "decider": "node_shutdown",
          "decision": "YES",
          "explanation": "no nodes are shutting down"
        },
        {
          "decider": "node_replacement",
          "decision": "YES",
          "explanation": "there are no ongoing node replacements"
        },
        {
          "decider": "filter",
          "decision": "YES",
          "explanation": "node passes include/exclude/require filters"
        },
        {
          "decider": "same_shard",
          "decision": "YES",
          "explanation": "this node does not hold a copy of this shard"
        },
        {
          "decider": "disk_threshold",
          "decision": "YES",
          "explanation": "enough disk for shard on node, free: [29gb], used: [3%], shard size: [5.2mb], free after allocating shard: [29gb]"
        },
        {
          "decider": "throttling",
          "decision": "YES",
          "explanation": "below shard recovery limit of outgoing: [0 < 2] incoming: [0 < 2]"
        },
        {
          "decider": "shards_limit",
          "decision": "YES",
          "explanation": "total shard limits are disabled: [index: -1, cluster: -1] <= 0"
        },
        {
          "decider": "awareness",
          "decision": "YES",
          "explanation": "node meets all awareness attribute requirements"
        },
        {
          "decider": "data_tier",
          "decision": "YES",
          "explanation": "index has a preference for tiers [data_content] and node has tier [data_content]"
        },
        {
          "decider": "ccr_primary_follower",
          "decision": "YES",
          "explanation": "shard is not a follower and is not under the purview of this decider"
        },
        {
          "decider": "searchable_snapshots",
          "decision": "YES",
          "explanation": "decider only applicable for indices backed by searchable snapshots"
        },
        {
          "decider": "searchable_snapshot_repository_exists",
          "decision": "YES",
          "explanation": "this decider only applies to indices backed by searchable snapshots"
        },
        {
          "decider": "searchable_snapshots_enable",
          "decision": "YES",
          "explanation": "decider only applicable for indices backed by searchable snapshots"
        },
        {
          "decider": "dedicated_frozen_node",
          "decision": "YES",
          "explanation": "this node's data roles are not exactly [data_frozen] so it is not a dedicated frozen node"
        },
        {
          "decider": "archive",
          "decision": "YES",
          "explanation": "decider only applicable for indices backed by archive functionality"
        }
      ]
    }
  ]
}
```
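Note that even though the shard is held by `instance-0000000028`, the `node_allocation_decisions` array above only contains an entry for `instance-0000000027`: despite `?include_yes_decisions`, there is no corresponding list of deciders explaining the `can_remain_on_current_node: yes` verdict for the current node.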