Background:
Clustering is used to achieve High Availability (HA) in many forms and is widely used in database technologies to meet service uptime requirements and to load balance client requests. In this guide we will configure an open-source Galera cluster on MariaDB. This document assumes that you are familiar with virtualization technologies, especially VirtualBox, and that you can create virtual machines, install an OS, configure networks, etc. in VirtualBox.
Requirements:
- Virtual Machines: Three (3)
- Memory (RAM): 2500 MB per VM
- Storage: 10 GB per VM
- Network: Three (3); NAT, Internal Network, Host-only Network
- MariaDB 10.4.12
- Galera 4 Cluster
Let’s dive in:
-
Step-1: Create a Virtual Machine named galera1 in VirtualBox with the following specifications (a VBoxManage sketch follows the list):
-
- 2 vCPU
- 2500 MB memory or RAM
- 10 GB storage
- Three network cards: NAT, Internal Network, Host-only Network
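- If you prefer to script the VM creation, a rough VBoxManage equivalent is sketched below; the host-only adapter name vboxnet0, the internal network name privnet, and the disk file name are assumptions for illustration and may differ on your host:
- # Create and register the VM, then set CPUs, memory and the three NICs
- VBoxManage createvm --name galera1 --ostype RedHat_64 --register
- VBoxManage modifyvm galera1 --cpus 2 --memory 2500
- VBoxManage modifyvm galera1 --nic1 nat --nic2 intnet --intnet2 privnet --nic3 hostonly --hostonlyadapter3 vboxnet0
- # Create a 10 GB disk and attach it to a SATA controller
- VBoxManage createmedium disk --filename galera1.vdi --size 10240
- VBoxManage storagectl galera1 --name "SATA" --add sata
- VBoxManage storageattach galera1 --storagectl "SATA" --port 0 --device 0 --type hdd --medium galera1.vdi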
-
Step-2: Install CentOS 7 64-bit on the virtual machine.
-
Step-3: Configure the network if needed; ideally it was already configured during installation.
-
Step-4: Connect to the virtual machine by its Host-only network IP from any SSH client.
-
Step-5: Shut down the virtual machine.
-
Step-6: Full clone the virtual machine to create a second VM named galera2 and a third VM named galera3.
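- If you are scripting this step, VBoxManage can do the full clone as well (a sketch; it clones the current machine state with default MAC-address handling):
- VBoxManage clonevm galera1 --name galera2 --register --mode machine
- VBoxManage clonevm galera1 --name galera3 --register --mode machine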
-
Step-7: Setting the hostname on the VMs:
- Connect to galera1 VM as root user and execute: hostnamectl set-hostname galera1
- Connect to galera2 VM as root user and execute: hostnamectl set-hostname galera2
- Connect to galera3 VM as root user and execute: hostnamectl set-hostname galera3
-
Step-8: Set the Host-only network IPs on the VMs; you will connect to the cluster nodes from an SSH client through these IPs (an nmcli alternative to nmtui is sketched after Step-9):
- Connect as root and, using the nmtui utility, set the static IP for galera1: 192.168.56.107
- Connect as root and, using the nmtui utility, set the static IP for galera2: 192.168.56.108
- Connect as root and, using the nmtui utility, set the static IP for galera3: 192.168.56.109
-
Step-9: Set the Internal network IPs of the VMs, which will act as a private network named privnet; the cluster nodes will communicate through these privnet IPs:
- Connect as root and, using the nmtui utility, set the static IP for galera1: 172.16.30.101
- Connect as root and, using the nmtui utility, set the static IP for galera2: 172.16.30.102
- Connect as root and, using the nmtui utility, set the static IP for galera3: 172.16.30.103
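- As a non-interactive alternative to nmtui for Steps 8 and 9, nmcli can assign the same static addresses; the connection names hostonly and privnet below are assumptions, list the real ones first with nmcli con show:
- # Example for galera1; repeat with the matching addresses on galera2 and galera3
- nmcli con mod hostonly ipv4.method manual ipv4.addresses 192.168.56.107/24
- nmcli con mod privnet ipv4.method manual ipv4.addresses 172.16.30.101/24
- nmcli con up hostonly && nmcli con up privnet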
-
Step-10: Make the following /etc/hosts entries on all three nodes (a quick connectivity check follows the entries):
-
- 192.168.56.107 galera1
- 172.16.30.101 galera1-privnet
- 192.168.56.108 galera2
- 172.16.30.102 galera2-privnet
- 192.168.56.109 galera3
- 172.16.30.103 galera3-privnet
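- A quick way to confirm that name resolution and the private network work is to ping the peers by their privnet names, for example from galera1:
- ping -c 2 galera2-privnet
- ping -c 2 galera3-privnet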
-
Step-11: Add the MariaDB repositories on all three nodes (galera1, galera2, galera3) as the root user:
- Create the repository file by:
- sudo vi /etc/yum.repos.d/mariadb.repo
-
- Add the following contents to the file:
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.4/centos7-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
-
- Enable the created repository:
- sudo yum makecache --disablerepo='*' --enablerepo='mariadb'
-
Step-12: Installing MariaDB on all servers one by one, galera1, galera2, galera3:
- Beginning with version 10.1, Galera Cluster is included by default with the MariaDB server package.
- sudo yum install MariaDB-server MariaDB-client -y
-
- Start the mariadb service:
- sudo systemctl start mariadb
-
- Enable the mariadb service so it starts automatically at boot:
- sudo systemctl enable mariadb
-
- From MariaDB 10.4, root has no default password after installation; set the root password:
- sudo mysql -uroot
- set password = password('your_password');
-
- Exit from MariaDB shell:
- Quit;
-
- Install rsync and policycoreutils-python:
- sudo yum install rsync policycoreutils-python -y
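- Optionally, confirm what was installed before moving on; the package names below are the ones used by the MariaDB repository and may differ in your setup:
- mysql -V
- rpm -q MariaDB-server MariaDB-client galera-4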
-
Step-13: Configure the first node galera1:
- Cluster configuration starts on the first node, galera1, and the configuration file will then be copied to the other nodes, galera2 and galera3.
- By default, MariaDB is configured to check the /etc/my.cnf.d directory for additional configuration settings from files ending in .cnf.
- [root@galera1 ~]# sudo vi /etc/my.cnf.d/galera.cnf
-
- Make the following entries in the file (a check of the wsrep_provider path follows the listing):
-
[mysqld]
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
bind-address=0.0.0.0

# Galera Provider Configuration
wsrep_on=ON
wsrep_provider=/usr/lib64/galera-4/libgalera_smm.so

# Galera Cluster Configuration
wsrep_cluster_name="my_cluster"
wsrep_cluster_address="gcomm://172.16.30.101,172.16.30.102,172.16.30.103"

# Galera Synchronization Configuration
wsrep_sst_method=rsync

# Galera Node Configuration
wsrep_node_address="172.16.30.101"
wsrep_node_name="galera1"
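- Before moving on, it is worth checking that the wsrep_provider path referenced in the file actually exists on your system, since the library location can differ between builds:
- ls -l /usr/lib64/galera-4/libgalera_smm.so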
-
Step-14: Configuring the second node, galera2:
- Copy the /etc/my.cnf.d/galera.cnf file to the second node from galera1 node:
- [root@galera1 ~]# scp /etc/my.cnf.d/galera.cnf root@galera2:/etc/my.cnf.d/galera.cnf
-
- Edit the “Galera Node Configuration” section at the end of the file to reflect the node galera2:
- [root@galera2 ~]# sudo vi /etc/my.cnf.d/galera.cnf
- # Galera Node Configuration
- wsrep_node_address="172.16.30.102"
- wsrep_node_name="galera2"
-
Step-15: Configuring the third node, galera3:
- Copy the /etc/my.cnf.d/galera.cnf file to the third node from galera1 node:
- [root@galera1 ~]# scp /etc/my.cnf.d/galera.cnf root@galera3:/etc/my.cnf.d/galera.cnf
-
- Edit the “Galera Node Configuration” section at the end of the file to reflect the node galera3:
- [root@galera3 ~]# sudo vi /etc/my.cnf.d/galera.cnf
-
- # Galera Node Configuration
- wsrep_node_address="172.16.30.103"
- wsrep_node_name="galera3"
-
Step-16: Allow the MariaDB and Galera ports so that the cluster nodes can communicate with each other; this should be done on all three nodes: galera1, galera2, galera3.
- Check the status of the firewall; usually you will find services like ssh and dhcp already in the service list:
- sudo firewall-cmd --list-all
- Allow the ports used by MariaDB and Galera:
- sudo firewall-cmd --permanent --zone=public --add-port=3306/tcp
- sudo firewall-cmd --permanent --zone=public --add-port=4567/tcp
- sudo firewall-cmd --permanent --zone=public --add-port=4568/tcp
- sudo firewall-cmd --permanent --zone=public --add-port=4444/tcp
- sudo firewall-cmd --permanent --zone=public --add-port=4567/udp
- Add each server to the firewall public zone by executing the following commands:
- sudo firewall-cmd --permanent --zone=public --add-source=172.16.30.101/32
- sudo firewall-cmd --permanent --zone=public --add-source=172.16.30.102/32
- sudo firewall-cmd --permanent --zone=public --add-source=172.16.30.103/32
- Reload the firewall to apply the new rules on all three nodes:
- sudo firewall-cmd --reload
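- To verify that the rules took effect, you can list the opened ports and sources afterwards:
- sudo firewall-cmd --zone=public --list-ports
- sudo firewall-cmd --zone=public --list-sources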
-
Step-17: Disable SELinux on all nodes:
- vi /etc/selinux/config
- SELINUX=disabled
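- Editing /etc/selinux/config only takes effect after a reboot; to switch to permissive mode immediately (not a full disable) and confirm the current mode, you can additionally run:
- sudo setenforce 0
- getenforce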
-
Step-18: Starting the Galera cluster:
- Stop all mariadb services on each node, maintaining the order galera3, galera2, galera1.
-
- [root@galera3 ~]# sudo systemctl stop mariadb
- [root@galera2 ~]# sudo systemctl stop mariadb
-
- [root@galera1 ~]# sudo systemctl stop mariadb
-
- Make sure the mariadb service is down on all nodes by executing the following on each node:
- sudo systemctl status mariadb
-
- Bringing up the first node galera1:
- [root@galera1 ~]# sudo galera_new_cluster
- ***If the cluster does not start, check systemctl status mariadb.service for errors; in my case setting safe_to_bootstrap: 1 in grastate.dat solved the problem (see the sketch below).
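- For reference, the bootstrap flag lives in grastate.dat inside the data directory; a minimal edit, assuming the default datadir /var/lib/mysql, looks like this and should only be done on the node you intend to bootstrap from:
- sudo vi /var/lib/mysql/grastate.dat
- # change the last line of the file to:
- safe_to_bootstrap: 1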
-
- Run the following command to see the node registered as cluster node:
- [root@galera1 ~]# mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"
-
- Enter password:
- +--------------------+-------+
- | Variable_name      | Value |
- +--------------------+-------+
- | wsrep_cluster_size | 1     |
- +--------------------+-------+
-
- On the remaining nodes, start MariaDB and they will join the new cluster.
- Start the mariadb service on galera2; it will automatically join the cluster:
-
- [root@galera2 ~]# sudo systemctl start mariadb
- [root@galera2 ~]# mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"
- Enter password:
- +--------------------+-------+
- | Variable_name      | Value |
- +--------------------+-------+
- | wsrep_cluster_size | 2     |
- +--------------------+-------+
- Start the mariadb service on galera3; it will automatically join the cluster:
-
- [root@galera3 ~]# sudo systemctl start mariadb
- [root@galera3 ~]# mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"
- Enter password:
- +--------------------+-------+
- | Variable_name      | Value |
- +--------------------+-------+
- | wsrep_cluster_size | 3     |
- +--------------------+-------+
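- Beyond the cluster size, a few other standard wsrep status variables are useful for a quick health check on any node; you would expect wsrep_cluster_status to be Primary and wsrep_local_state_comment to be Synced:
- mysql -u root -p -e "SHOW STATUS WHERE Variable_name IN ('wsrep_cluster_status','wsrep_local_state_comment','wsrep_incoming_addresses');"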
-
Step-19: Testing the replication.
- On the first node galera1, create a database named cluster_test:
-
- [root@galera1 ~]# mysql -u root -p -e 'CREATE DATABASE cluster_test;'
-
- On the galera2 and galera3 nodes, check that the database created on galera1 is reflected:
- [root@galera2 ~]# mysql -uroot -p -e 'show databases;'
- Enter password:
- +--------------------+
- | Database           |
- +--------------------+
- | cluster_test       |
- | information_schema |
- | mysql              |
- | performance_schema |
- | test               |
- +--------------------+
- [root@galera3 ~]# mysql -uroot -p -e 'show databases;'
- Enter password:
- +--------------------+
- | Database           |
- +--------------------+
- | cluster_test       |
- | information_schema |
- | mysql              |
- | performance_schema |
- | test               |
- +--------------------+
-
- Do some DML like insert and select from any node and check that the changes are reflected on the other nodes:
- Create a table on galera1 and insert a row:
- [root@galera1 ~]# mysql -u root -p -e 'CREATE TABLE cluster_test.dmltest ( id INT NOT NULL AUTO_INCREMENT, color VARCHAR(25), PRIMARY KEY(id));'
- [root@galera1 ~]# mysql -u root -p -e 'INSERT INTO cluster_test.dmltest (color) VALUES ("red");'
- Check that the table and the inserted row exist on the other nodes:
- [root@galera2 ~]# mysql -u root -p -e 'SELECT * FROM cluster_test.dmltest;'
- Enter password:
- +----+-------+
- | id | color |
- +----+-------+
- |  1 | red   |
- +----+-------+
- [root@galera3 ~]# mysql -u root -p -e 'SELECT * FROM cluster_test.dmltest;'
- Enter password:
- +----+-------+
- | id | color |
- +----+-------+
- |  1 | red   |
- +----+-------+
- Insert a row on galera2 and see it reflected on galera1 and galera3:
- [root@galera2 ~]# mysql -u root -p -e 'INSERT INTO cluster_test.dmltest (color) VALUES ("green");'
- [root@galera1 ~]# mysql -u root -p -e 'SELECT * FROM cluster_test.dmltest;'
- Enter password:
- +----+-------+
- | id | color |
- +----+-------+
- |  1 | red   |
- |  5 | green |
- +----+-------+
- [root@galera3 ~]# mysql -u root -p -e 'SELECT * FROM cluster_test.dmltest;'
- Enter password:
- +----+-------+
- | id | color |
- +----+-------+
- |  1 | red   |
- |  5 | green |
- +----+-------+
- Insert a row on galera3 and see it reflected on galera1 and galera2:
- [root@galera3 ~]# mysql -u root -p -e 'INSERT INTO cluster_test.dmltest (color) VALUES ("blue");'
- [root@galera1 ~]# mysql -u root -p -e 'SELECT * FROM cluster_test.dmltest;'
- Enter password:
- +----+-------+
- | id | color |
- +----+-------+
- |  1 | red   |
- |  5 | green |
- |  6 | blue  |
- +----+-------+
- [root@galera2 ~]# mysql -u root -p -e 'SELECT * FROM cluster_test.dmltest;'
- Enter password:
- +----+-------+
- | id | color |
- +----+-------+
- |  1 | red   |
- |  5 | green |
- |  6 | blue  |
- +----+-------+
- Test an UPDATE operation on galera1 and its reflection on galera2 and galera3:
- [root@galera1 ~]# mysql -u root -p -e 'UPDATE cluster_test.dmltest SET color="yellow" WHERE color="green";'
- [root@galera2 ~]# mysql -u root -p -e 'SELECT * FROM cluster_test.dmltest;'
- Enter password:
- +----+--------+
- | id | color  |
- +----+--------+
- |  1 | red    |
- |  5 | yellow |
- |  6 | blue   |
- +----+--------+
- [root@galera3 ~]# mysql -u root -p -e 'SELECT * FROM cluster_test.dmltest;'
- Enter password:
- +----+--------+
- | id | color  |
- +----+--------+
- |  1 | red    |
- |  5 | yellow |
- |  6 | blue   |
- +----+--------+
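- Once the tests pass, you can drop the test database from any node; the drop replicates to the other nodes just like the statements above:
- [root@galera1 ~]# mysql -u root -p -e 'DROP DATABASE cluster_test;'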
Kudos!!! You configured the cluster successfully.