MariaDB Galera Cluster Node cannot start after going down
Solution 1:
I had the same problem, and after finally fixing it (on CentOS 7 with MariaDB-server-10.2.0-1) I wrote up how to set it up correctly, and I hope it fixes yours too. Follow the instructions below and build your Galera nodes from scratch. Note that I only use the mandatory configuration; you can add your own settings later. My guess is that you either missed the fifth step or did not do it correctly, but I will write out all the steps anyway so anyone else can follow them easily.
The problem arises when you do not run the galera_new_cluster command on the master (bootstrap) node, or when you do not use an appropriate gcomm:// address for wsrep_cluster_address. When the master then fails, the other nodes cannot rejoin their peers, and not even the master itself can come back into the cluster.
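To illustrate the address part (the IPs below are only example placeholders, not your real addresses): an empty gcomm:// URL tells a node to bootstrap a brand-new cluster, while a URL that lists all peers lets a restarted node rejoin the existing one.
wsrep_cluster_address='gcomm://'                                # bootstraps a new, separate cluster
wsrep_cluster_address='gcomm://192.0.2.1,192.0.2.2,192.0.2.3'   # rejoins the existing cluster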
Consider 3 servers named GLR{1,2,3}; we are going to set up Galera Cluster on each of them. I will explain in the seventh step how to avoid failure in a two-node cluster.
STEP 1
For installation:
Open /etc/yum.repos.d/mariadb.repo
with your favourite editor and add the following lines into it:
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.2/centos7-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
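Then install MariaDB from that repository. Exact package names can differ between versions, but with the repository above something along these lines should install the server and pull in the Galera provider as a dependency:
$ sudo yum install MariaDB-server MariaDB-client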
STEP 2
If you do not know how to manage/configure SELinux, set it to permissive mode and check your log files after finishing the installation to work out what policy changes you need. You may also want the setroubleshoot-server and setools-console packages installed to better understand your SELinux logs.
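If you go the permissive route, a minimal sketch with the standard CentOS tooling looks like this (adjust to your own policy requirements):
$ sudo yum install setroubleshoot-server setools-console
$ sudo setenforce 0     # permissive until the next reboot
Set SELINUX=permissive in /etc/selinux/config if you want it to persist across reboots.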
If, on the other hand, you have SELinux enabled and do not want to set it to permissive mode, note that it may block mysqld from carrying out required operations. You will need to configure it to allow mysqld to run external programs and open listen sockets on unprivileged ports, that is, things an unprivileged user can do.
Teaching SELinux management is beyond the scope of this answer, but you can put it in permissive mode for the mysql domain only by running the following command:
semanage permissive -a mysql_t
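You can confirm that the domain was added to the permissive list with:
$ semanage permissive -l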
STEP 3
After installing with yum, add the following lines to the end of /etc/my.cnf.d/server.cnf on each GLR server, as shown below:
[GLR1] ↴
$ vim /etc/my.cnf.d/server.cnf
[galera]
# Mandatory settings
wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address='gcomm://{GLR1 IP},{GLR2 IP},{GLR3 IP}'
wsrep_cluster_name='galera'
wsrep_node_address='{GLR1 IP}'
wsrep_node_name='GLR1'
wsrep_sst_method=rsync
binlog_format=row
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
bind-address=0.0.0.0
[GLR2] ↴
$ vim /etc/my.cnf.d/server.cnf
[galera]
# Mandatory settings
wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address='gcomm://{GLR1 IP},{GLR2 IP},{GLR3 IP}'
wsrep_cluster_name='galera'
wsrep_node_address='{GLR2 IP}'
wsrep_node_name='GLR2'
wsrep_sst_method=rsync
binlog_format=row
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
bind-address=0.0.0.0
[GLR3] ↴
$ vim /etc/my.cnf.d/server.cnf
[galera]
# Mandatory settings
wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address='gcomm://{GLR1 IP},{GLR2 IP},{GLR3 IP}'
wsrep_cluster_name='galera'
wsrep_node_address='{GLR3 IP}'
wsrep_node_name='GLR3'
wsrep_sst_method=rsync
binlog_format=row
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
bind-address=0.0.0.0
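Before moving on, it is worth confirming that the provider library referenced by wsrep_provider actually exists at that path (the Galera provider package installs it there on 64-bit CentOS):
$ ls -l /usr/lib64/galera/libgalera_smm.so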
STEP 4
Reboot all servers.
STEP 5
Run the following command on GLR1 only, and then start mariadb.service on GLR2 and GLR3:
$ galera_new_cluster    # on GLR1 only
$ service mysql start   # on GLR2 and GLR3
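On CentOS 7, where MariaDB runs under systemd, the equivalent of the second command on GLR2 and GLR3 is:
$ systemctl start mariadb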
STEP 6
As you noticed in your question, you can test connectivity between servers by using the following command:
$ mysql -u root -p -e "SHOW STATUS LIKE 'wsrep%'"
Or just check the cluster size:
$ mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';"
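On a healthy three-node cluster the last query should report a value of 3:
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+
You can also confirm that the node is part of the Primary Component; the following should report 'Primary':
$ mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status';"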
STEP 7
On the other hand, after finishing all the steps above, if you want to use a TWO-NODE cluster you can use this article provided by the galeracluster website on how to avoid a single node failure causing the other node to stop working.
There are two solutions available to you:
- You can bootstrap the surviving node to form a new Primary Component, using the pc.bootstrap wsrep Provider option. To do so, log into the database client and run the following command:
SET GLOBAL wsrep_provider_options='pc.bootstrap=YES';
This bootstraps the surviving node as a new Primary Component. When the other node comes back online or regains network connectivity with this node, it will initiate a state transfer and catch up with this node.
- In the event that you want the node to continue to operate, you can use the pc.ignore_sb wsrep Provider option. To do so, log into the database client and run the following command:
SET GLOBAL wsrep_provider_options='pc.ignore_sb=TRUE';
The node resumes processing updates and will continue to do so even if it suspects a split-brain situation.
Warning: Enabling pc.ignore_sb is dangerous in a multi-master setup because of the split-brain risk mentioned above. However, it does simplify things in master-slave clusters, especially when you only use two nodes.
In addition to the solutions provided above, you can avoid the situation entirely by using Galera Arbitrator. Galera Arbitrator functions as an odd node in quorum calculations, meaning that if you enable Galera Arbitrator for a two-node cluster, the surviving node remains the Primary Component even if the other node fails or loses network connectivity.
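Galera Arbitrator runs as a separate daemon (garbd), typically on a third machine. A minimal sketch, assuming the cluster name and node addresses from the configuration above:
$ garbd --group='galera' --address='gcomm://{GLR1 IP},{GLR2 IP}' --daemon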