
    How to Configure NFS / NFS-Ganesha Server Clustering Using Pacemaker on CentOS7/RHEL7/Oracle Linux 7

     As most of you already know, NFS (Network File System) allows a server to share files over the network. In this tutorial we will create an HA cluster for NFS/NFS-Ganesha so that the single point of failure can be avoided.

    (Diagram: Pacemaker cluster)


    My Environment Details:

    Server : 2

    OS : Oracle Linux 7

    Host Name : node1, node2

    Software: Corosync, Pacemaker, OCI CLI

    IP: 10.0.0.5/24, 10.0.0.6/24

    VIP: 10.0.0.100/24


    Corosync is an open-source cluster engine that communicates among the cluster nodes and keeps the cluster information database up to date. Pacemaker is an open-source high-availability resource manager that has been used on computer clusters since 2004.

    Step 1: Set the hostname of both the Servers

    # hostnamectl set-hostname "node1"    # on node2, use "node2"
    # exec bash

    Update the /etc/hosts file on both the servers,

    10.0.0.5  node1
    10.0.0.6  node2
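
    Optionally, verify that each node can resolve and reach the other by name before going further (a quick sanity check; adjust the hostnames if yours differ):

    # ping -c 2 node2    # run from node1
    # ping -c 2 node1    # run from node2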

    Step 2: Update both the Servers

    # yum update -y 
    # reboot

    Step 3: Install Corosync, Pacemaker and NFS-Ganesha Packages on both the servers

    [root@node1 ~]# yum install -y corosync nfs-ganesha-gluster pacemaker pcs fence-agents-all
    [root@node2 ~]# yum install -y corosync nfs-ganesha-gluster pacemaker pcs fence-agents-all
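
    If you want to confirm that everything landed and note the versions in use, a quick optional check:

    # rpm -q corosync pacemaker pcs nfs-ganesha-gluster fence-agents-all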


    Step 4: Either Disable Or Open the Required Ports. For this lab I've disabled them.

    Disable SELinux, and either update your firewall rules or disable the firewall on both the servers.

    # sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
    # reboot
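
    If you would rather not reboot right away, you can also switch SELinux to permissive mode for the running session (the config change above still takes effect on the next boot):

    # setenforce 0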

     Disable Firewall on both the servers.

    # systemctl stop firewalld
    # systemctl disable firewalld


    If you want to keep the firewall running, add the high-availability service to the firewall on both the servers.

    # firewall-cmd --permanent --add-service=high-availability
    # firewall-cmd --reload
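
    NFS clients will also need to reach the NFS ports. If the firewall stays on, you will likely want to open the standard NFS-related firewalld services as well (nfs, rpc-bind and mountd ship with firewalld; adjust to the protocols you actually export):

    # firewall-cmd --permanent --add-service=nfs
    # firewall-cmd --permanent --add-service=rpc-bind
    # firewall-cmd --permanent --add-service=mountd
    # firewall-cmd --reload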


    Step 5: Now Enable and start the pcsd service on both the servers

    # systemctl enable pcsd
    # systemctl start pcsd

    (Screenshot: pcsd service status)
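
    You can reproduce that check on each node with:

    # systemctl status pcsd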


    Step 6: Now Enable the corosync and pacemaker services on both the servers

    # systemctl enable corosync
    # systemctl enable pacemaker

    Step 7: Create a password for the hacluster user on both the servers and authenticate the nodes from the cluster

    [root@node1 ~]# echo "enter_password" | passwd --stdin hacluster
    [root@node2 ~]# echo "enter_password" | passwd --stdin hacluster

    Now authenticate the cluster nodes. Run this command on node1.

    [root@node1 ~]# pcs cluster auth node1 node2 -u hacluster -p <password>
    [root@node1 ~]#

    You should see an authorized message.

    Step 8: Now create a cluster named "HA-NFS"

    [root@node1 ~]# pcs cluster setup --name HA-NFS node1 node2
    [root@node1 ~]#

    Step 9: Now start the cluster and enable it on all the nodes.

    [root@node1 ~]# pcs cluster start --all
    [root@node1 ~]# pcs cluster enable --all
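
    To confirm both nodes have joined, you can check the cluster status right away:

    # pcs cluster status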

    Step 10: Define a fencing device for your cluster. For this lab I've disabled it.

    If you want to disable it as well, run the following command.

    [root@node1 ~]#  pcs property set stonith-enabled=false

    If you want to enable STONITH instead, follow this procedure. Fencing helps you in a split-brain condition: if any node goes faulty, the fencing device removes that node from the cluster. In Pacemaker, fencing is defined using a STONITH (Shoot The Other Node In The Head) resource.

    For a STONITH device you need to use a shared disk. A 1 GB shared disk will be good enough.

    Note down the disk ID of your shared device.

    [root@node1 ~]# ls -l /dev/disk/by-id/

    Now run the below "pcs stonith" command from either of the nodes to create the fencing device (disk_fencing).

    [root@node1 ~]# pcs stonith create disk_fencing fence_scsi \
        pcmk_host_list="node1 node2" \
        pcmk_monitor_action="metadata" pcmk_reboot_action="off" \
        devices="/dev/disk/by-id/<disk-id-of-shared-disk>" \
        meta provides="unfencing"
    [root@node1 ~]#

    Verify the status of the STONITH resource using the below command:

    [root@node1 ~]# pcs stonith show

    Step 11: Now check the status of the cluster.

    [root@node1 devopszones]# pcs status
    Cluster name: HA-NFS
    Stack: corosync
    Current DC: node1 (version 1.1.23-1.0.1.el7_9.1-9acf116022) - partition with quorum
    Last updated: Tue Feb  1 10:03:55 2022
    Last change: Fri Dec 17 11:56:40 2021 by root via crm_resource on node1
    
    2 nodes configured
    
    Step 12: Now Configure NFS-Ganesha

    Create a file called /etc/ganesha/ganesha.conf and add the following entry to it. This example exports a Gluster volume through the GLUSTER FSAL; change the volume name as per your use case.

    EXPORT{
        Export_Id = 1 ;       # Unique identifier for each EXPORT (share)
        Path = "/glustervol1";  # Export path of our NFS share
    
        FSAL {
            name = GLUSTER;          # Backing type is Gluster
            hostname = "localhost";  # Hostname of Gluster server
            volume = "glustervol1";    # The name of our Gluster volume
        }
    
        Access_type = RW;          # Export access permissions
        Squash = No_root_squash;   # Control NFS root squashing
        Disable_ACL = FALSE;       # Enable NFSv4 ACLs
        Pseudo = "/glustervol1";     # NFSv4 pseudo path for our NFS share
        Protocols = "3","4" ;      # NFS protocols supported
        Transports = "UDP","TCP" ; # Transport protocols supported
        SecType = "sys";           # NFS Security flavors supported
    }
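
    Before handing nfs-ganesha over to the cluster, it can help to confirm that the backing Gluster volume exists and that the configuration actually loads, for example by starting the service once by hand on one node (volume name glustervol1 as in the example above):

    # gluster volume info glustervol1    # confirm the backing volume exists
    # systemctl start nfs-ganesha        # start once manually to test the config
    # showmount -e localhost             # the export should be listed
    # systemctl stop nfs-ganesha         # stop again; the cluster will manage it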

    Step 13: Now create the NFS-Ganesha and VIP resources and group them.

    # pcs resource create nfs_server systemd:nfs-ganesha op monitor interval=10s
    # pcs resource create nfs_ip ocf:heartbeat:IPaddr2 ip=10.0.0.100 cidr_netmask=24 op monitor interval=10s
    # pcs resource group add nfs_group nfs_server nfs_ip

    Step 14: Now view and verify the cluster using pcs status.

    [root@node1 devopszones]# pcs status
    Cluster name: HA-NFS
    Stack: corosync
    Current DC: node1 (version 1.1.23-1.0.1.el7_9.1-9acf116022) - partition with quorum
    Last updated: Tue Feb  1 10:22:18 2022
    Last change: Fri Dec 17 11:56:40 2021 by root via crm_resource on node1
    
    2 nodes configured
    2 resource instances configured
    
    Online: [ node1  node2 ]
    
    Full list of resources:
    
     Resource Group: nfs_group
         nfs_server (systemd:nfs-ganesha):  Started node1
         nfs_ip     (ocf::heartbeat:IPaddr2):       Started node1
    Daemon Status:
      corosync: active/enabled
      pacemaker: active/enabled
      pcsd: active/enabled
    [root@node1 devopszones]#
    

    Step 15: Now Mount the NFS share from a client.

    [root@localhost ~]# mount 10.0.0.100:/glustervol1 /mnt/
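
    Once the share is mounted, it is worth testing failover. One way to do it (commands as in pcs 0.9 on EL7; newer pcs versions use "pcs node standby" instead): put the active node into standby, confirm the resource group and the VIP move to the other node, then bring the node back.

    [root@node1 ~]# pcs cluster standby node1      # force resources off node1
    [root@node1 ~]# pcs status                     # nfs_group should now be running on node2
    [root@node1 ~]# pcs cluster unstandby node1    # make node1 eligible again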

    That's it. Your NFS-Ganesha HA cluster is ready.
