
How to Create a Red Hat HA Cluster, Part I


This post will consist of a couple of parts. At the end of the series, we will have a Red Hat HA cluster that runs Apache and NFS. In this tutorial I will be using two CentOS 6 hosts, named node01 and node02, on a KVM host. Each node has two interfaces: one for giving users access to the services, and the other for the heartbeat.

**Topology:** As depicted in Figure-1, it is a two-node cluster with shared disks and a fencing agent. Each node in the cluster has the hosts entries below.

```shell
# /etc/hosts entries on each node
192.168.122.100 node01.cls.local
192.168.123.100 node01-hb.cls.local
192.168.122.200 node02.cls.local
192.168.123.200 node02-hb.cls.local
```
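For reference, here is a minimal sketch of the heartbeat interface configuration, assuming eth1 is the heartbeat NIC on node01 (the device name is an assumption; adjust it to your environment):

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth1 on node01 (assumed heartbeat NIC)
DEVICE=eth1
BOOTPROTO=static
IPADDR=192.168.123.100
NETMASK=255.255.255.0
ONBOOT=yes
```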


![Image](/natro/clustering1.png) 

**Figure-1: Simple two-node cluster** 

As depicted in Figure-1, there are three shareable disks created on the KVM host. If you want, you can also use iSCSI disks instead. Since it is a two-node cluster, it is a special cluster type that is highly susceptible to split-brain situations in case the heartbeat network breaks down. In order to prevent this, one disk is set up as a quorum disk. The minimum size for a quorum disk is 10MB; 512MB is used in this topology. Besides the quorum disk, we have two separate shared disks, each 1GB in size. One disk will be used for the Apache DocumentRoot and the other one for the NFS export.
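As a sketch of how these disks can be created and attached on the KVM host (the image paths and the vdb/vdc/vdd target names are assumptions), raw images are created with qemu-img and attached to both guests in shareable mode with caching disabled:

```shell
# On the KVM host: create three raw images (quorum, Apache, NFS).
qemu-img create -f raw /var/lib/libvirt/images/quorum.img 512M
qemu-img create -f raw /var/lib/libvirt/images/apache.img 1G
qemu-img create -f raw /var/lib/libvirt/images/nfs.img 1G

# Attach each image to both nodes as a shareable disk with caching off.
for node in node01 node02; do
  virsh attach-disk $node /var/lib/libvirt/images/quorum.img vdb --mode shareable --cache none --persistent
  virsh attach-disk $node /var/lib/libvirt/images/apache.img vdc --mode shareable --cache none --persistent
  virsh attach-disk $node /var/lib/libvirt/images/nfs.img    vdd --mode shareable --cache none --persistent
done
```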

**Fencing:** 
In order to have a proper cluster configuration, we also need to configure fencing. Fencing is the disconnection of a node from the cluster's shared storage. If communication with a single node in the cluster fails, then the other nodes in the cluster must restrict or release access to resources that the failed cluster node may have access to. This may not be possible by contacting the failed node itself, as it may be unresponsive, so it needs to be cut off externally. This is accomplished by a fence agent. A fence device is an external device that can be used by the cluster to restrict access to shared resources by an errant node, or to issue a hard reboot on the cluster node.
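Since our nodes are KVM guests, one option is the fence_virsh agent, which logs in to the KVM host over SSH and controls the guests via virsh. As a minimal sketch of verifying that the agent can reach a node (the host address 192.168.122.1 and the credentials are assumptions):

```shell
# Query node01's power status through the hypervisor (assumed address/credentials).
fence_virsh -a 192.168.122.1 -l root -p secret -n node01 -o status
```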
**Installing Cluster Software on each node:** 
In order to configure the cluster, we need to install some software packages on each node.
```shell
# Run on each node; the prompt below shows node01.
[root@node01 ~]# yum install rgmanager lvm2-cluster gfs2-utils ccs
```
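The ccs tool talks to the ricci daemon on each node, so as a sketch of the usual follow-up steps, ricci gets a password and the service is started and enabled (the password value is up to you):

```shell
# On each node: set a password for the ricci user, then start and enable the daemon.
passwd ricci
service ricci start
chkconfig ricci on
```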

**Firewall:** The firewall service is disabled on each node in this setup for ease of configuration, but that is not acceptable for a production environment. For the cluster to be configured properly, the cluster services on each node must be able to communicate with each other, so adding the correct firewall rules is vital. You can use the following rules to allow cluster traffic through the iptables firewall for the various cluster components. For cman (corosync), ports 5404 and 5405 are used to receive multicast traffic.

```shell
# For cman (corosync multicast):
iptables -I INPUT -p udp -m state --state NEW -m multiport --dports 5404,5405 -j ACCEPT

# For ricci:
iptables -I INPUT -p tcp -m state --state NEW -m multiport --dports 11111 -j ACCEPT

# For modcluster:
iptables -I INPUT -p tcp -m state --state NEW -m multiport --dports 16851 -j ACCEPT

# For gnbd:
iptables -I INPUT -p tcp -m state --state NEW -m multiport --dports 14567 -j ACCEPT

# For luci:
iptables -I INPUT -p tcp -m state --state NEW -m multiport --dports 8084 -j ACCEPT

# For DLM:
iptables -I INPUT -p tcp -m state --state NEW -m multiport --dports 21064 -j ACCEPT

# For ccsd:
iptables -I INPUT -p udp -m state --state NEW -m multiport --dports 50007 -j ACCEPT
iptables -I INPUT -p tcp -m state --state NEW -m multiport --dports 50008 -j ACCEPT
```


After applying the rules above, we need to save them and restart the firewall service.

```shell
service iptables save ; service iptables restart
```
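To confirm the rules survived the restart, a quick check is to list the INPUT chain (a sketch; the grep pattern just picks out a few of the ports above):

```shell
# List the INPUT chain and look for a couple of the cluster ports.
iptables -nL INPUT | grep -E '5404|11111|21064'
```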

This is the first part of a series of posts about Red Hat HA. In the next part, we will initialize the quorum disk, configure the fence agent, and build our cluster environment.