How to set up STONITH in a two-node active/passive Linux HA Pacemaker cluster?

This is a slightly older question, but the problem presented here is based on a misconception about how and when failover in clusters, especially two-node clusters, works.

The gist is: you cannot do failover testing by disabling communication between the two nodes. Doing so results in exactly what you are seeing: a split-brain scenario with additional, mutual STONITH. If you want to test the fencing capabilities, a simple killall -9 corosync on the active node will do. Other options are crm node fence or stonith_admin -F.
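As a sketch, the three fencing tests mentioned above would look something like this on the command line (node2 is a placeholder for whichever node you want fenced; these require a running cluster, so they are not something to copy blindly):

```shell
# Simulate a sudden cluster-stack failure on the active node;
# the surviving node should then fence it:
killall -9 corosync

# Or, ask the cluster to fence a specific node via crmsh:
crm node fence node2

# Or, use the low-level fencing tool directly:
stonith_admin --fence node2
```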

From the not quite complete description of your cluster (where is the output of crm configure show and cat /etc/corosync/corosync.conf?), it seems you are using the 10.10.10.xx addresses for messaging, i.e. Corosync/cluster communication. The 172.10.10.xx addresses are your regular/service network addresses, and you would access a given node, for example via SSH, by its 172.10.10.xx address. DNS also seems to resolve a node hostname like node1 to 172.10.10.1.
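For illustration, a setup like that would typically bind Corosync to the messaging network in /etc/corosync/corosync.conf. This is a hypothetical excerpt, assuming the 10.10.10.0/24 network carries cluster traffic; your actual file may differ:

```
# /etc/corosync/corosync.conf (hypothetical excerpt)
totem {
    version: 2
    interface {
        ringnumber: 0
        # cluster/messaging network, NOT the 172.10.10.xx service network
        bindnetaddr: 10.10.10.0
        mcastport: 5405
    }
}
```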

You have STONITH configured to use SSH, which is not a very good idea in itself, but you are probably just testing. I haven't used it myself but I assume the SSH STONITH agent logs into the other node and issues a shutdown command, like ssh root@node2 "shutdown -h now" or something equivalent.
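For reference, a test-only configuration of that agent might look roughly like the following. This is a sketch, assuming crmsh and the external/ssh agent from cluster-glue; the resource name st-ssh and the hostnames are made up:

```shell
# Hypothetical example: SSH-based fencing, suitable for lab testing only
crm configure primitive st-ssh stonith:external/ssh \
    params hostlist="node1 node2"
crm configure property stonith-enabled=true
```

The external/ssh agent is explicitly not for production, precisely because it depends on the victim node being reachable and able to execute a shutdown, which a genuinely failed node is not.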

Now, what happens when you cut cluster communication between the nodes? Each node no longer sees the other as alive and well, because there is no more communication between them. Thus each node assumes it is the only survivor of some unfortunate event and tries to become (or remain) the active or primary node. This is the classic and dreaded split-brain scenario.

Part of taking over is making sure the other, presumably failed node is down for good, which is where STONITH comes in. Keep in mind that both nodes are now playing the same game: trying to become (or stay) active and take over all cluster resources, as well as shooting the other node in the head.

You can probably guess what happens now. node1 does ssh root@node2 "shutdown -h now" and node2 does ssh root@node1 "shutdown -h now". This doesn't use the cluster communication network 10.10.10.xx but the service network 172.10.10.xx. Since both nodes are in fact alive and well, they have no problem issuing commands or receiving SSH connections, so both nodes shoot each other at the same time. This kills both nodes.

If you don't use STONITH, a split-brain can have even worse consequences, especially with DRBD, where you could end up with both nodes becoming Primary. Data corruption is then likely, and the split-brain must be resolved manually.

I recommend reading the material at http://www.hastexo.com/resources/hints-and-kinks, which is written and maintained by the people who contributed (and still contribute) a large chunk of what we today call "the Linux HA stack".

TL;DR: If you are cutting cluster communication between your nodes in order to test your fencing setup, you are doing it wrong. Use killall -9 corosync, crm node fence, or stonith_admin -F instead. Cutting cluster communication will only result in a split-brain scenario, which can and will lead to data corruption.


You could try adding auto_tie_breaker: 1 to the quorum section of /etc/corosync/corosync.conf.

When ATB is enabled, the cluster can suffer the failure of up to 50% of its nodes at the same time, in a deterministic fashion. The cluster partition, or the set of nodes that are still in contact with the node that has the lowest nodeid, will remain quorate. The other nodes will be inquorate.
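As a sketch, the quorum section would then look something like this (assuming the corosync_votequorum provider; note that auto_tie_breaker is not meant to be combined with the two_node option):

```
# /etc/corosync/corosync.conf (hypothetical excerpt)
quorum {
    provider: corosync_votequorum
    # on an even split, the partition containing the lowest nodeid wins
    auto_tie_breaker: 1
}
```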