
    How to Configure NFS / NFS-Ganesha Server Clustering Using Pacemaker on CentOS7/RHEL7/Oracle Linux 7

    As most of you already know, NFS (Network File System) is used to share files over the network. In this tutorial we will create an HA cluster for NFS/NFS-Ganesha so that a single point of failure can be avoided.


    Components

    • Corosync provides clustering infrastructure to manage which nodes are involved, their communication, and quorum.
    • Pacemaker manages cluster resources and rules of their behavior.
    • Gluster is a scalable and distributed filesystem.
    • Ganesha is an NFS server that can use many different backing filesystem types, including Gluster.

    My Environment Details:

    Servers : 2

    OS : Oracle Linux 7

    Host Names : node1, node2

    Software : Corosync, Pacemaker, pcs, GlusterFS, NFS-Ganesha

    IP: 10.0.0.5/24, 10.0.0.6/24

    VIP: 10.0.0.100/24

    Objectives

    In this tutorial, you’ll learn to:

    • Create a Gluster volume
    • Configure Ganesha
    • Create a Cluster
    • Create Cluster services

    Prerequisites

    • Two Oracle Linux 7 instances (node1 and node2) installed with the following configuration:
      • a non-root user with sudo permissions (I've performed this task as the root user; you can use a non-root user for a better security posture)
      • an ssh keypair for the non-root user
      • ability to ssh from one host (node1) to the other (node2) using passwordless ssh login (see the sketch after this list)
      • an additional block volume for use with Gluster
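
    If passwordless ssh is not yet in place, the quick sketch below shows one way to set it up; the user name opc is only an example, so substitute your own non-root user:

    $ ssh-keygen -t rsa -b 4096      # generate a key pair on node1; accept the default file location
    $ ssh-copy-id opc@node2          # copy the public key to node2 (example user "opc")
    $ ssh opc@node2 hostname         # verify passwordless login works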

    Corosync is an open-source cluster engine that communicates between the cluster nodes and keeps the cluster information database up to date. Pacemaker is an open-source high-availability resource manager that has been used on computer clusters since 2004.

    Step 1: Set the hostname on both the servers

    # hostnamectl set-hostname "node1"
    # exec bash

    Run the same command on the second server with the hostname "node2". Then update the /etc/hosts file on both servers:

    10.0.0.5  node1
    10.0.0.6  node2
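
    You can quickly confirm that both names resolve on each server, for example:

    # getent hosts node1 node2    # both entries should come back from /etc/hosts
    # ping -c 2 node2             # confirm basic reachability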

    Step 2: Update both the servers

    # yum update -y 
    # reboot

    Step 3: Install software on both the servers

    # yum install -y oracle-gluster-release-el7
    # yum-config-manager --enable ol7_addons ol7_latest ol7_optional_latest ol7_UEKR5
    # yum install -y corosync glusterfs-server nfs-ganesha-gluster pacemaker pcs fence-agents-all

    Create the Gluster volume

    You will prepare each server's disk, create a replicated Gluster volume, and activate the volume.

    1. (On all servers) Create an XFS filesystem on /dev/sdb with a label of gluster-000
      # mkfs.xfs -f -i size=512 -L gluster-000 /dev/sdb
    2. (On all servers) Create a mountpoint, add an fstab(5) entry for the disk with the label gluster-000, and mount the filesystem
      # mkdir -p /data/glusterfs/sharedvol/mybrick
      # echo 'LABEL=gluster-000 /data/glusterfs/sharedvol/mybrick xfs defaults  0 0' >> /etc/fstab
      # mount /data/glusterfs/sharedvol/mybrick
    3. (On all servers) Enable and start the Gluster service
      # systemctl enable --now glusterd

    1. On node1: Create the Gluster environment by adding peers
      # gluster peer probe node2
      peer probe: success.
      
      # gluster peer status
      Number of Peers: 1
      
      Hostname: node2
      Uuid: 328b1652-c69a-46ee-b4e6-4290aef11043
      State: Peer in Cluster (Connected)
      

    Show that our peers have joined the environment

    On node2:

    # gluster peer status
    Number of Peers: 1
    
    
    Hostname: node1
    Uuid: ac64c0e3-02f6-4814-83ca-1983999c2bdc
    State: Peer in Cluster (Connected)


    1. On node1: Create a Gluster volume named sharedvol which is replicated across our two hosts: node1 and node2.
      # gluster volume create sharedvol replica 2 node{1,2}:/data/glusterfs/sharedvol/mybrick/brick
      For more details on volume types, see the Gluster documentation on setting up volumes.
    2. On node1: Start our Gluster volume named sharedvol
      # gluster volume start sharedvol

    Our replicated Gluster volume is now available and can be verified from either node.

    1. # gluster volume info
      Volume Name: sharedvol
      Type: Replicate
      Volume ID: 466a6c8e-7764-4c0f-bfe6-591cc6a570e8
      Status: Started
      Snapshot Count: 0
      Number of Bricks: 1 x 2 = 2
      Transport-type: tcp
      Bricks:
      Brick1: node1:/data/glusterfs/sharedvol/mybrick/brick
      Brick2: node2:/data/glusterfs/sharedvol/mybrick/brick
      Options Reconfigured:
      transport.address-family: inet
      nfs.disable: on
      performance.client-io-threads: off

    # gluster volume status
    Status of volume: sharedvol
    Gluster process                             TCP Port  RDMA Port  Online  Pid
    ------------------------------------------------------------------------------
    Brick node1:/data/glusterfs/sharedvol/myb
    rick/brick                                  49152     0          Y       7098
    Brick node2:/data/glusterfs/sharedvol/myb
    rick/brick                                  49152     0          Y       6860
    Self-heal Daemon on localhost               N/A       N/A        Y       7448
    Self-heal Daemon on node2      N/A       N/A        Y       16839
    
    
    Task Status of Volume sharedvol
    ------------------------------------------------------------------------------
    There are no active volume tasks
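
    Optionally, before exposing the volume through NFS-Ganesha, you can sanity-check it with a native FUSE mount from either node. This is only a quick test sketch; the mountpoint /mnt/glustertest is just an example:

    # mkdir -p /mnt/glustertest
    # mount -t glusterfs node1:/sharedvol /mnt/glustertest
    # echo "gluster test" > /mnt/glustertest/testfile   # the file should appear under the brick path on both nodes
    # umount /mnt/glustertest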


    Step 4: Either disable or open the required ports. For this lab, I've disabled them.

    Disable SELinux and either update your firewall rules or disable the firewall on both servers.

    # sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
    # reboot

    Disable the firewall on both servers.

    # systemctl disable --now firewalld


    If you want to keep the firewall running, then add the high-availability service to the firewall on both servers.

    # firewall-cmd --permanent --add-service=high-availability
    # firewall-cmd --reload
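
    In addition to the high-availability service, cluster peers and NFS clients also need the Gluster and NFS ports. A possible sketch is shown below; the predefined service names assume a reasonably recent firewalld, and older builds may lack a glusterfs service, in which case open the Gluster ports manually:

    # firewall-cmd --permanent --add-service=nfs --add-service=rpc-bind --add-service=mountd
    # firewall-cmd --permanent --add-service=glusterfs
    # firewall-cmd --reload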


    Step 5: Enable and start the pcsd service on both servers

    # systemctl enable --now pcsd



    Step 6: Enable the corosync and pacemaker services on both servers

    # systemctl enable corosync
    # systemctl enable pacemaker

    Step 7: Set a password for the hacluster user on both servers and authenticate the nodes of the cluster

    [root@node1 ~]# echo "enter_password" | passwd --stdin hacluster
    [root@node2 ~]# echo "enter_password" | passwd --stdin hacluster

    Now authenticate the cluster nodes. Run this command on node1.

    [root@node1 ~]# pcs cluster auth node1 node2 -u hacluster -p <password>
    [root@node1 ~]#

    You should see an authorized message.

    Step 8: Create a cluster named "HA-NFS"

    [root@node1 ~]# pcs cluster setup --name HA-NFS node1 node2
    [root@node1 ~]#

    Step 9: Start the cluster and enable it on all nodes.

    [root@node1 ~]# pcs cluster start --all
    [root@node1 ~]# pcs cluster enable --all

    Step 10: Define a fencing device for your cluster. For this lab, I've disabled fencing.

    If you want to disable it as well, run the following command.

    [root@node1 ~]#  pcs property set stonith-enabled=false

    If you want to enable STONITH, then follow this procedure. Fencing helps in a split-brain condition: if any node goes faulty, the fencing device removes that node from the cluster. In Pacemaker, fencing is defined using a STONITH (Shoot The Other Node In The Head) resource.

    For a STONITH device you need to use a shared disk. A 1 GB shared disk is good enough.

    Note down the disk ID of your shared device.

    [root@node1 ~]# ls -l /dev/disk/by-id/

    Now run the following "pcs stonith" command from either node to create the fencing device (disk_fencing).

    [root@node1 ~]# pcs stonith create disk_fencing fence_scsi \
        pcmk_host_list="node1 node2" \
        pcmk_monitor_action="metadata" pcmk_reboot_action="off" \
        devices="disk-id of shared disk" \
        meta provides="unfencing"
    [root@node1 ~]#

    Verify the status of the STONITH device using the command below:

    [root@node1 ~]# pcs stonith show

    Step 11: Check the status of the cluster.

    [root@node1 devopszones]# pcs status
    Cluster name: HA-NFS
    Stack: corosync
    Current DC: node1 (version 1.1.23-1.0.1.el7_9.1-9acf116022) - partition with quorum
    Last updated: Tue Feb  1 10:03:55 2022
    Last change: Fri Dec 17 11:56:40 2021 by root via crm_resource on node1
    
    2 nodes configured
    
    Step 12: Configure NFS-Ganesha (on all servers)

    Create a file called /etc/ganesha/ganesha.conf and add the following entry, changing the volume name as per your use case. The file must be identical on both nodes (see the copy sketch after the configuration).

    EXPORT{
        Export_Id = 1 ;            # Unique identifier for each EXPORT (share)
        Path = "/sharedvol";       # Export path of our NFS share

        FSAL {
            name = GLUSTER;          # Backing type is Gluster
            hostname = "localhost";  # Hostname of Gluster server
            volume = "sharedvol";    # The name of our Gluster volume
        }

        Access_type = RW;          # Export access permissions
        Squash = No_root_squash;   # Control NFS root squashing
        Disable_ACL = FALSE;       # Enable NFSv4 ACLs
        Pseudo = "/glustervol1";   # NFSv4 pseudo path for our NFS share
        Protocols = "3","4" ;      # NFS protocols supported
        Transports = "UDP","TCP" ; # Transport protocols supported
        SecType = "sys";           # NFS Security flavors supported
    }
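
    Since the export definition must match on both nodes, you can simply copy the file across once it is written (a minimal sketch):

    # scp /etc/ganesha/ganesha.conf node2:/etc/ganesha/ganesha.conf

    There is no need to enable or start nfs-ganesha with systemctl here; Pacemaker starts it through the systemd:nfs-ganesha resource created in the next step.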

    Step 13: Create the NFS-Ganesha and VIP (virtual IP) resources and group them.

    # pcs resource create nfs_server systemd:nfs-ganesha op monitor interval=10s
    # pcs resource create nfs_ip ocf:heartbeat:IPaddr2 ip=10.0.0.100 cidr_netmask=24 op monitor interval=10s
    # pcs resource group add nfs_group nfs_server nfs_ip

    Step 14: View and verify the cluster using pcs status.

    [root@node1 devopszones]# pcs status
    Cluster name: HA-NFS
    Stack: corosync
    Current DC: node1 (version 1.1.23-1.0.1.el7_9.1-9acf116022) - partition with quorum
    Last updated: Tue Feb  1 10:22:18 2022
    Last change: Fri Dec 17 11:56:40 2021 by root via crm_resource on node1
    
    2 nodes configured
    2 resource instances configured
    
    Online: [ node1  node2 ]
    
    Full list of resources:
    
     Resource Group: nfs_group
         nfs_server (systemd:nfs-ganesha):  Started node1
         nfs_ip     (ocf::heartbeat:IPaddr2):       Started node1
    Daemon Status:
      corosync: active/enabled
      pacemaker: active/enabled
      pcsd: active/enabled
    [root@node1 devopszones]#
    

    Step 15: Mount the NFS share on a client.

    On client1: Mount the NFS service provided by our cluster and create a file

    # yum install -y nfs-utils
    # mkdir /sharedvol
    # mount -t nfs 10.0.0.100:/sharedvol /sharedvol
    # df -h /sharedvol/
    Filesystem                 Size  Used Avail Use% Mounted on
    10.0.0.100:/sharedvol   16G  192M   16G   2% /sharedvol
    # echo "Hello from OpenWorld" > /sharedvol/hello

    That's it. Your NFS-Ganesha HA cluster is ready.
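
    To confirm that failover actually works, you can put the active node into standby and watch the resource group move. This is only a quick check; pcs 0.9 on EL7 uses the "pcs cluster standby" form shown here, while newer pcs releases use "pcs node standby":

    [root@node1 ~]# pcs cluster standby node1      # nfs_group should move to node2
    [root@node1 ~]# pcs status                     # nfs_server and nfs_ip should now be Started on node2
    [root@node1 ~]# pcs cluster unstandby node1    # bring node1 back into the cluster

    The client mount through the VIP 10.0.0.100 should stay accessible during the switchover, apart from a short pause while the resources move.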


