
     How to configure Automatic Virtual IP (VIP) Failover on Oracle Cloud Infrastructure

    When you are configuring a highly available service with clustering software, automatic Virtual IP (VIP) failover is a point you cannot ignore. This article explains how to automate the Virtual IP (VIP) failover process on Oracle Cloud by integrating OCI CLI with Pacemaker and Corosync.

    Corosync is an open-source cluster engine that communicates with multiple cluster nodes and keeps the cluster information database up to date. Pacemaker is an open-source high-availability resource manager used on computer clusters since 2004.

    Corosync/Pacemaker can manage high availability for a virtual IP at the host level, which means the IP can automatically move from one host to another in case of failure. The problem is that this move is not reflected in your cloud VCN (Virtual Cloud Network), so the IP will not be reachable even if it is up on the node until it is also updated on the VCN resources, specifically in the attached VNIC section of the compute instance. We will use OCI CLI to automate this process.
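
    For example, you can check which private IPs the VCN currently has assigned to a VNIC with the OCI CLI (the VNIC OCID below is a placeholder; substitute your own):

    # List the primary and secondary private IPs assigned to a VNIC in the VCN
    oci network private-ip list --vnic-id "ocid1.vnic.oc1.iad.aaaa..." --auth instance_principal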

    We will write a script that updates the secondary IP on the compute instance to make the virtual IP active, and then integrate that script with the native Corosync/Pacemaker script that manages the cluster resources.

    (Figure: Corosync/Pacemaker HA server cluster diagram)

    My Environment Details:

    Servers: 2

    OS: Oracle Linux 7

    Host Names: node1, node2

    Software: Corosync, Pacemaker, OCI CLI

    IPs: 10.0.0.5/24, 10.0.0.6/24

    VIP: 10.0.0.100/24

    Procedure:

    Step 1: Install and configure Pacemaker & Corosync: https://www.devopszones.com/2022/02/how-to-configure-nfs-nfs-ganesha-server.html
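
    If you need a quick reference, a typical install on Oracle Linux 7 looks roughly like this (package names are the standard EL7 High Availability ones; enable the appropriate repository for your setup):

    # Install the cluster stack; pcs pulls in its dependencies
    yum install -y pcs pacemaker corosync fence-agents-all
    # Enable and start the pcs daemon on both nodes
    systemctl enable pcsd
    systemctl start pcsd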

    Step 2: Install and configure OCI CLI: https://www.devopszones.com/2021/05/how-to-install-oci-cli.html
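
    Once the CLI is installed, it's worth verifying it on each node (the instance principal test assumes a dynamic group and policy are in place; see the note after Step 4):

    # Confirm the CLI is installed and on the PATH
    oci --version
    # Confirm instance principal authentication works from this node
    oci iam availability-domain list --auth instance_principal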

    Step 3: Collect the VNIC OCIDs using the OCI console.
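
    If you prefer the CLI over the console, the VNIC OCIDs can also be listed per instance (the instance OCID below is a placeholder):

    # Print the OCIDs of all VNICs attached to an instance
    oci compute instance list-vnics --instance-id "ocid1.instance.oc1.iad.aaaa..." --query 'data[*].id' --auth instance_principal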

    Step 4: Create the OCI secondary IP assignment script, saved on both nodes as /usr/lib/ocf/resource.d/heartbeat/plumb_vip_oci.sh (the path referenced in Step 8):

    #!/bin/sh
    # Manas Tripathy <manas.tri@gmail.com>

    ##### OCI vNIC variables
    server="`hostname -s`"
    node1vnic="ocid1.vnic.oc1.iad.abct22xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
    node2vnic="ocid1.vnic.oc1.iad.abctyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy"
    vnicip="10.0.0.100"

    # Work around the Pacemaker/OCI CLI locale RuntimeError (see Step 5)
    export LC_ALL=C.UTF-8
    export LANG=C.UTF-8

    touch /tmp/vip-plumb.log

    # Assign the VIP as a secondary private IP to this node's VNIC,
    # unassigning it from the other node first if necessary
    if [ "$server" = "node1" ]; then
        /bin/oci network vnic assign-private-ip --unassign-if-already-assigned --vnic-id $node1vnic --ip-address $vnicip --hostname-label gfsvip --auth instance_principal > /tmp/vip-plumb.log 2>&1
    else
        /bin/oci network vnic assign-private-ip --unassign-if-already-assigned --vnic-id $node2vnic --ip-address $vnicip --hostname-label gfsvip --auth instance_principal > /tmp/vip-plumb.log 2>&1
    fi

     

    Update node1vnic, node2vnic, and vnicip with your own values. I've used instance principal authentication for OCI; you can use that or the OCI CLI config file for authentication.
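
    For instance principal authentication to work, both nodes must belong to a dynamic group that is allowed to manage VNICs and private IPs. A minimal policy sketch (the dynamic group and compartment names here are placeholders; adjust the statements to your tenancy):

    Allow dynamic-group vip-failover-dg to use vnics in compartment my-compartment
    Allow dynamic-group vip-failover-dg to use private-ips in compartment my-compartment
    Allow dynamic-group vip-failover-dg to use subnets in compartment my-compartment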

    Step 5: Ensure the LANG and LC_ALL parameters are set (as in the script above) to work around a Pacemaker and OCI CLI RuntimeError.

    Step 6: Test the OCI secondary IP assignment script by running it on each node and watch how it assigns the IP in the OCI console.
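
    For example, on node1 (substitute your own VNIC OCID in the last command):

    # Run the assignment script and inspect its output
    sh /usr/lib/ocf/resource.d/heartbeat/plumb_vip_oci.sh
    cat /tmp/vip-plumb.log
    # Confirm the VIP now shows up as a secondary private IP on this node's VNIC
    oci network private-ip list --vnic-id "ocid1.vnic.oc1.iad.abct22xxx..." --auth instance_principal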

    Step 7: Create the cluster VIP as follows:

    #pcs resource create service_VIP ocf:heartbeat:IPaddr2 ip=10.0.0.100 cidr_netmask=24 op monitor interval=20s
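
    You can confirm where the VIP is running and rehearse a host-level failover with pcs:

    # Show the current location of the VIP resource
    pcs status resources
    # Force a failover by putting the active node in standby, then bring it back
    pcs cluster standby node1
    pcs cluster unstandby node1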

    Step 8: Update the script /usr/lib/ocf/resource.d/heartbeat/IPaddr2 on both nodes so that the add_interface () function calls the assignment script created in Step 4:

    add_interface () {
        local cmd msg ipaddr netmask broadcast iface label

        ipaddr="$1"
        netmask="$2"
        broadcast="$3"
        iface="$4"
        label="$5"

        ##### OCI/IPaddr Integration
        /usr/lib/ocf/resource.d/heartbeat/plumb_vip_oci.sh
        # ... (the rest of the original add_interface function follows unchanged)
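
    With the integration in place, moving the resource should now update the VCN as well; for example:

    # Move the VIP to node2, then check that the OCI CLI call ran there
    pcs resource move service_VIP node2
    cat /tmp/vip-plumb.log
    # The VIP should now also appear under node2's VNIC in the OCI console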



