Elucidating How to Maintain Data Consistency, Accessibility and Retainability While Using Highly Available (HA) OpenShift Clusters

October 7, 2020 | by CloudGen Admin


How are stateful workloads handled?

How are distributed pods able to access the data at all times?

How is the accessible data kept consistent?

And when failures occur, how is the data retained?

These are some of the questions that intrigue most of us while running an application on a highly available (HA) OpenShift cluster. Let us find the answers.

Firstly, the pod workload is handled by distributing it across the cluster to make the application highly available. This can be done by:

Using replica sets and anti-affinity rules

    • A replica set gives us the pod count; running more replicas ensures that the specified number of pods is available in a
      healthy state at any point in time.
    • The pod workload is handled by scheduling these pods on different compute nodes of the cluster, while making sure that no two pods of the same application land on the same node.
    • For this purpose, a combination of nodeAffinity and podAntiAffinity
      rules is set so that the application pods are distributed evenly (the full manifest appears in step 6; a quick check is sketched below).
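
Once the deployment from step 6 is running, the spread can be verified from the command line. This is only a quick sketch; the label app=deploy-nfs and the < project name > placeholder are the ones used later in this post.

# List the pods together with the node each one landed on; with the
# podAntiAffinity rule in place, no two pods should share a node.
oc get pods -n < project name > -l app=deploy-nfs -o wide

# The replica count can be raised later; the scheduler keeps honouring
# the affinity rules while placing the extra pods.
oc scale deployment deploy-nfs -n < project name > --replicas=3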

Secondly, data consistency, accessibility and retainability are achieved by provisioning NFS storage that uses persistent volumes.

      • Here, NFS acts as the persistent file storage device, and we can make this storage instance available to the cluster using static provisioning, which requires creating persistent volumes manually.
      • The NFS directory exported from the server acts as the shared volume.

Below is the procedure for setting this up:

1. Creating NFS-server EC2 Instance

      • Launch an Amazon Linux 2 AMI EC2 instance (the steps also work on CentOS, RHEL and Fedora).
      • Configure the instance details with the OpenShift cluster VPC, associate
        the related public subnet in the same zone, and proceed with the default settings.
      • Configure the security groups to allow NFS port 2049.
      • Launch the instance.
      • Now access the instance terminal through PuTTY (SSH) and proceed with the NFS-
        server installation.
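
For reference, the same setup can be sketched with the AWS CLI. The security group ID, AMI ID, subnet ID, CIDR and key name below are hypothetical placeholders, not values taken from this post.

# Allow NFS (TCP 2049) in the security group attached to the instance.
# The sg-/ami-/subnet- IDs and the CIDR are placeholders.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 2049 \
  --cidr 10.0.0.0/16

# Launch the instance inside the OpenShift cluster VPC/subnet.
aws ec2 run-instances \
  --image-id ami-0abcdef1234567890 \
  --instance-type t3.medium \
  --subnet-id subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0 \
  --key-name my-key-pair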

2. Installing the NFS-server

      • Install the NFS utilities on the server with yum:

yum install nfs-utils

      • Create the directory that will be shared by NFS:

mkdir /var/nfsshare

      • Change the permissions of the folder as follows:

chmod -R 755 /var/nfsshare
chown nfsnobody:nfsnobody /var/nfsshare

      • Start the services and enable them to be started at boot time:

systemctl enable rpcbind
systemctl enable nfs-server
systemctl enable nfs-lock
systemctl enable nfs-idmap
systemctl start rpcbind
systemctl start nfs-server
systemctl start nfs-lock
systemctl start nfs-idmap
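
As a quick sanity check, one can confirm the core services are running before moving on:

# Both services should report "active"
systemctl is-active rpcbind nfs-server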

      • Share the NFS directory over the network as follows:

vim /etc/exports
/var/nfsshare 192.168.100.2(rw,sync,no_root_squash,no_all_squash) ## 192.168.100.2 is the NFS client IP; list each client, or a subnet such as 192.168.100.0/24
## Validate with the command below (after restarting the NFS service)
showmount -e 192.168.100.1 ## lists the exports; 192.168.100.1 is the NFS-server IP

      • Finally, restart the NFS service so the exports take effect:

systemctl restart nfs-server

This completes the setup of the NFS server.
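
Optionally, the exports can also be double-checked from the server side itself:

# Show the currently exported directories together with their options
exportfs -v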

3. At NFS-Client End

      • Connect to the worker nodes of the OpenShift cluster, which act as the NFS clients (see the sketch below for one way to get a shell on a node).

## NFS client installation on the worker nodes
yum install nfs-utils # needed only on non-CoreOS nodes
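
If direct SSH to the workers is not set up, one way to get a shell on a node is oc debug; this is only a sketch, with the worker hostname left as a placeholder.

# Open a debug pod on the worker node and switch into the host filesystem
oc debug node/< worker1 hostname >
chroot /host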

      • Create the NFS mount point:

mkdir -p /mnt/nfs/var/nfsshare

      • Mount the /var/nfsshare directory:

mount -t nfs 192.168.100.1:/var/nfsshare /mnt/nfs/var/nfsshare/ # 192.168.100.1 is the NFS-server IP

      • Check if it is mounted correctly:

df -h
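
To keep the share mounted across reboots, an /etc/fstab entry can be added on each client. A minimal sketch, using the same server IP and paths as above:

# /etc/fstab entry so the NFS share is mounted automatically at boot
192.168.100.1:/var/nfsshare  /mnt/nfs/var/nfsshare  nfs  defaults  0 0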

4. Deploying Storage Class

      • Below is the sample storage class used; one can edit the storage class name and the provisioner name accordingly.

# vim class.yml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
provisioner: example.com/nfs
parameters:
  archiveOnDelete: "false"

      • Once the class.yml file is updated, we can create the resource using oc create:

# create the storage class
oc create -f class.yml
# check if the storage class got created
oc get storageclass

5. Creating a Persistent Volume (PV) and a Persistent Volume Claim (PVC)

      • The PV contains the file storage details, such as the NFS server IP and the NFS shared path; it therefore points at the file storage device, i.e. the NFS server.
      • With a reclaim policy of "Retain", the persistent volume will not be deleted or erased, so the relationship between the pod and its storage can be re-established.

## create pv (pv.yml)
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-nfs
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs-storage
  nfs:
    server: 192.168.100.1   ## nfs server IP
    path: /var/nfsshare     ## nfs server shared path

## create pvc (pvc.yml)
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-nfs
  namespace: < project name >
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Mi
  volumeName: pv-nfs   ## should be the same as the PV name
  storageClassName: nfs-storage

## create the above resources
oc create -f pv.yml
oc create -f pvc.yml
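
Before wiring the claim into a deployment, it is worth confirming that the claim has bound to the volume:

# Both should report STATUS "Bound" once the claim matches the volume
oc get pv pv-nfs
oc get pvc pvc-nfs -n < project name >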

6. Creating Pods to use Persistent Volume Claims

    • Pods utilize this shared volume through a persistent volume claim (PVC), as in the deployment below.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-nfs
  labels:
    app: deploy-nfs
spec:
  replicas: 2
  selector:
    matchLabels:
      app: deploy-nfs
  template:
    metadata:
      labels:
        app: deploy-nfs
    spec:
      containers:
        - name: deploy-nfs1
          image: busybox
          command:
            - sleep
            - "3600"
          resources:
            requests:
              cpu: "0.01"
          volumeMounts:
            - name: pv-nfs
              mountPath: /mydata
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: In
                    values:
                      - < worker1 hostname >
                      - < worker2 hostname >
                      - < worker3 hostname >
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - deploy-nfs
              topologyKey: "kubernetes.io/hostname"
      volumes:
        - name: pv-nfs
          persistentVolumeClaim:
            claimName: pvc-nfs

# create the pods using the deployment.yml file
oc create -f deployment.yml
# check the pod status
oc get pods -n < project name >

7. NFS Validation

  1. At the NFS-server end, create a sample file and check that it shows up on the NFS clients and inside the pod as well.
  2. At the NFS-client end, create a sample file and check that it shows up on the NFS server and inside the pod as well.
  3. At the pod level, create a file in the container mount path, i.e. /mydata; it should be reflected on the NFS-server and NFS-client machines (a sketch of this check follows).
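
As a rough illustration of check 3, one could write a file from inside one of the pods and read it back on the NFS server. The pod name below is a hypothetical placeholder; pick a real one from oc get pods.

# Inside the pod (deploy-nfs-xxxxx is a placeholder pod name): write a test file
oc exec deploy-nfs-xxxxx -n < project name > -- sh -c 'echo "hello from the pod" > /mydata/test.txt'

# On the NFS server: the same file should be visible in the shared directory
cat /var/nfsshare/test.txt

# On an NFS client (worker node): it should appear under the mount point too
cat /mnt/nfs/var/nfsshare/test.txt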

Takeaways

We can now make our data persistent, accessible and retainable through NFS by means of static provisioning, which requires manual creation of persistent volumes.

What's ahead?

Can we claim a persistent volume without actually creating it?

Yes. Without actually creating a persistent volume, we can allocate a storage volume just by creating a persistent volume claim; this reduces the time needed to bring up the application.

In the next post, we will see how dynamically provisioned NFS storage is set up.

WRITTEN BY

Saicharan Adavelli

DevOps Engineer | hands-on with Cloud Platform Infrastructure (AWS) | Red Hat
OpenShift Certified