How to Create a Red Hat HA Cluster, Part II
In the previous post, we prepared the cluster environment: two cluster nodes, three shared disks, and two network interfaces on each node, and we installed the packages required to build the cluster. In this part, we start building the cluster itself.
Creating a Quorum Disk
We need to initialize one of our three disks as a quorum disk, which is /dev/sdb in this case. As stated in the previous part, all disks are shared, so you can initialize the disk from any one of the cluster nodes.
[root@node01 ~]# mkqdisk -c /dev/sdb -l qdisk
mkqdisk v3.0.12.1
Writing new quorum disk label 'qdisk' to /dev/sdb.
WARNING: About to destroy all data on /dev/sdb; proceed [N/y] ? y
Initializing status block for node 1...
Initializing status block for node 2...
Initializing status block for node 3...
Initializing status block for node 4...
Initializing status block for node 5...
Initializing status block for node 6...
Initializing status block for node 7...
Initializing status block for node 8...
Initializing status block for node 9...
Initializing status block for node 10...
Initializing status block for node 11...
Initializing status block for node 12...
Initializing status block for node 13...
Initializing status block for node 14...
Initializing status block for node 15...
Initializing status block for node 16...
You can check the quorum disk with the mkqdisk -L command. We labeled our quorum disk qdisk; this label will be used in the cluster configuration in a forthcoming post.
[root@node01 ~]# mkqdisk -L
mkqdisk v3.0.12.1
/dev/block/8:16:
/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-0:
/dev/disk/by-path/pci-0000:00:09.0-virtio-pci-virtio2-scsi-0:0:0:0:
/dev/sdb:
Magic: eb7a62c2
Label: qdisk
Created: Wed Dec 6 14:00:51 2017
Host: node01.cls.local
Kernel Sector Size: 512
Recorded Sector Size: 512
Creating the Cluster Environment
In this section we start building the cluster configuration: giving the cluster a name, configuring fencing, configuring the quorum disk, and so on. To configure the cluster we will use ccs (the Cluster Configuration System). Before using ccs, we need to set a password for the ricci user and start the ricci service on all cluster nodes.
#Set password for the ricci user on all nodes.
passwd ricci
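If you are scripting the setup, the password can also be set non-interactively with the RHEL-specific --stdin option; 'YourRicciPassword' below is just a placeholder.
#Non-interactive alternative; replace the placeholder with your own password.
echo 'YourRicciPassword' | passwd --stdin ricci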
Starting the ricci Service
#Start the ricci service on all nodes and enable it to start at boot time.
service ricci start
chkconfig ricci on
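To confirm that ricci is actually up before moving on, you can check the service and its default TCP port (11111); a quick check might look like this:
#On each node
service ricci status
netstat -tlnp | grep 11111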
Giving a name to the cluster (ankara-cluster)
[root@node01 ~]# ccs -h localhost --createcluster ankara-cluster
Adding nodes to the cluster.
[root@node01 ~]# ccs -h localhost --addnode node01-hb.cls.local
Node node01-hb.cls.local added.
[root@node01 ~]# ccs -h localhost --addnode node02-hb.cls.local
Node node02-hb.cls.local added.
Checking config
ccs -h localhost --getconf
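At this stage the generated configuration contains little more than the cluster name and the two nodes; the output of --getconf should look roughly like the sketch below (config_version, node IDs, and empty sections may differ on your system):
<cluster config_version="3" name="ankara-cluster">
  <clusternodes>
    <clusternode name="node01-hb.cls.local" nodeid="1"/>
    <clusternode name="node02-hb.cls.local" nodeid="2"/>
  </clusternodes>
  <fencedevices/>
  <rm/>
</cluster>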
So far, the cluster configuration lives only on the first node, because that is where we issued the commands. To push it to the other nodes, we need to sync the cluster configuration.
[root@node01 ~]# ccs -h localhost --sync --activate
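To verify that the sync worked, you can ask ricci on the second node for its copy of the configuration (it may prompt for the ricci password set earlier); for example:
#Run from node01; the output should match the local configuration.
ccs -h node02-hb.cls.local --getconf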
Configuring Fencing
This part was the hardest for me, as I was using virtual guests as cluster nodes. There are several fencing agents for virtual guests available on the Internet, but none of them worked for me. After quite a bit of searching, I got fencing working with fence_xvm. There are plenty of fencing agents to choose from, depending on your infrastructure; you can list them with the command below.
[root@node01 ~]# ccs -h localhost --lsfenceopts
To see the options of a specific agent, issue the command below. I will use fence_xvm for our cluster environment.
[root@node01 ~]# ccs -h localhost --lsfenceopts fence_xvm
fence_xvm - Fence agent for virtual machines
Required Options:
Optional Options:
option: No description available
debug: Specify (stdin) or increment (command line) debug level
ip_family: IP Family ([auto], ipv4, ipv6)
multicast_address: Multicast address (default=225.0.0.12 / ff05::3:1)
ipport: Multicast or VMChannel IP port (default=1229)
retrans: Multicast retransmit time (in 1/10sec; default=20)
auth: Authentication (none, sha1, [sha256], sha512)
hash: Packet hash strength (none, sha1, [sha256], sha512)
key_file: Shared key file (default=/etc/cluster/fence_xvm.key)
port: Virtual Machine (domain name) to fence
use_uuid: Treat [domain] as UUID instead of domain name. This is provided for compatibility with older fence_xvmd installations.
action: Fencing action (null, off, on, [reboot], status, list, monitor, metadata)
timeout: Fencing timeout (in seconds; default=30)
delay: Fencing delay (in seconds; default=0)
domain: Virtual Machine (domain name) to fence (deprecated; use port)
To configure fence_xvm, we need to do some configuration both on the virtualization host (the physical machine) and on the virtual guests (the cluster nodes).
# Install the packages below on the KVM host, which runs RHEL 7.3
yum install fence-virt fence-virtd fence-virtd-multicast fence-virtd-libvirt
Configuring the Firewall (on the KVM host)
firewall-cmd --permanent --zone=trusted --change-interface=virbr0
firewall-cmd --reload
Or
[root@kvmhost ~]# firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.122.100" port port="1229" protocol="tcp" accept'
success
[root@kvmhost ~]# firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.122.100" port port="1229" protocol="udp" accept'
[root@kvmhost ~]# firewall-cmd --reload
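To double-check what was applied, firewalld can list the active rich rules and the interfaces in the trusted zone, for example:
#On the KVM host
firewall-cmd --list-rich-rules
firewall-cmd --zone=trusted --list-interfaces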
Create a random shared key on the KVM host.
#Create a random shared key:
mkdir -p /etc/cluster
touch /etc/cluster/fence_xvm.key
chmod 0600 /etc/cluster/fence_xvm.key
dd if=/dev/urandom bs=512 count=1 of=/etc/cluster/fence_xvm.key
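As a quick sanity check, the key file should be exactly 512 bytes (matching bs=512 count=1) and readable only by root:
#On the KVM host
ls -l /etc/cluster/fence_xvm.key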
Configure the fence daemon interactively by issuing fence_virtd -c.
#On the KVM host
fence_virtd -c
Enable and start the service.
#On the KVM host
systemctl enable fence_virtd
systemctl start fence_virtd
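Before moving on, it is worth confirming that the daemon is active and has a UDP listener on the multicast port used in the configuration below (1229); something like:
#On the KVM host
systemctl status fence_virtd
ss -ulnp | grep 1229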
The resulting configuration after fence_virtd -c (on the KVM host):
[root@kvmhost ~]# cat /etc/fence_virt.conf
backends {
    libvirt {
        uri = "qemu:///system";
    }
}

listeners {
    multicast {
        port = "1229";
        family = "ipv4";
        interface = "virbr0";
        address = "225.0.0.12";
        key_file = "/etc/cluster/fence_xvm.key";
    }
}

fence_virtd {
    module_path = "/usr/lib64/fence-virt";
    backend = "libvirt";
    listener = "multicast";
}
Testing on the KVM host
[root@kvmhost ~]# fence_xvm -o list
node01 60b3d846-8508-47d3-90e3-a3d5702ef523 on
node02 68d3554f-f2fc-4a1e-8a1a-4e1a46987700 on
Configuring the Virtual Guests (Cluster Nodes)
Install the packages below on each cluster node.
yum install fence-virt fence-virtd
Copy the shared key fence_xvm.key from the KVM host to each virtual guest's /etc/cluster directory (create this directory if it does not exist), and clear the iptables rules on the guests so that the fencing traffic is not blocked. A possible approach is sketched below.
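For example (a sketch, assuming the guests are reachable from the KVM host as node01.cls.local and node02.cls.local; adjust hostnames to your environment):
#On the KVM host: create /etc/cluster on each guest and copy the key over.
for guest in node01.cls.local node02.cls.local; do
    ssh root@$guest "mkdir -p /etc/cluster"
    scp /etc/cluster/fence_xvm.key root@$guest:/etc/cluster/fence_xvm.key
done
#On each guest: flush iptables so the fencing traffic is not blocked.
iptables -F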
Testing
Issue the command below on all virtual guests to check whether fencing is configured correctly. (We have not configured it on the cluster itself yet.)
[root@node01 cluster]# fence_xvm -o list
node01 60b3d846-8508-47d3-90e3-a3d5702ef523 on
node02 68d3554f-f2fc-4a1e-8a1a-4e1a46987700 on
Reboot and power-off test (on the KVM host or on the cluster nodes).
#On the KVM host or on one of the cluster nodes
fence_xvm -o reboot -H "23edf335-a80a-4a7c-b6c9-4ad5ad79e02d"
fence_xvm -o off -H "23edf335-a80a-4a7c-b6c9-4ad5ad79e02d"
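fence_xvm can also address a guest by its libvirt domain name instead of the UUID, which is easier to read; for example:
#Reboot node02 by domain name, then check its power state.
fence_xvm -o reboot -H node02
fence_xvm -o status -H node02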
Configuring Fencing on the Cluster
After configuring the KVM host and the virtual guests for fencing successfully, we can configure fencing for the cluster itself.
Adding Fence Devices
[root@node01 ~]# ccs -h localhost --addfencedev FDEV_XVM1 agent=fence_xvm
[root@node01 ~]# ccs -h localhost --addfencedev FDEV_XVM2 agent=fence_xvm
Adding Fence Methods and Instances
[root@node01 ~]# ccs -h localhost --addmethod FMET_XVM node01-hb.cls.local
[root@node01 ~]# ccs -h localhost --addmethod FMET_XVM node02-hb.cls.local
[root@node01 ~]# ccs -h localhost --addfenceinst FDEV_XVM1 node01-hb.cls.local FMET_XVM domain=node01
[root@node01 ~]# ccs -h localhost --addfenceinst FDEV_XVM2 node02-hb.cls.local FMET_XVM domain=node02
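Before syncing, you can review what has been added; ccs can list the configured fence devices and the per-node fence instances (assuming your ccs version provides these listing options):
ccs -h localhost --lsfencedev
ccs -h localhost --lsfenceinst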
Sync the configuration
[root@node01 ~]# ccs -h localhost --sync --activate
Testing fencing (node02 will be fenced, i.e. rebooted, by node01):
[root@node01 cluster]# fence_node node02-hb.cls.local
fence node02-hb.cls.local success
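Once node02 has rebooted and rejoined, you can check cluster membership from either node, for example with clustat (from rgmanager) or cman_tool; both nodes should show as online:
clustat
cman_tool nodes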
In the next post, we will create a failover domain, resources, and service groups. At the end we will have a fully functional HA cluster environment. For the first part of the tutorial, see the previous post.