
Set up an NFS client provisioner in Kubernetes


One of the most common needs when deploying Kubernetes is the ability to use shared storage. While there are several options available, one of the most common and easiest to set up is an NFS server.
This post explains how to set up a dynamic NFS client provisioner on Kubernetes, relying on an existing NFS server on your systems.

Step 1. Set up an NFS server (sample for CentOS)

The first thing you will need, of course, is an NFS server. This can be achieved with a few simple steps:

  • Install the NFS package: yum install -y nfs-utils
  • Enable and start the rpcbind and nfs-server services:
    systemctl enable rpcbind
    systemctl enable nfs-server
    systemctl start rpcbind
    systemctl start nfs-server

  • Create the directory that will be shared by NFS, and change the permissions:
    mkdir /var/nfsshare
    chmod -R 755 /var/nfsshare
    chown nfsnobody:nfsnobody /var/nfsshare

  • Share the NFS directory over the network by creating the /etc/exports file:
    vi /etc/exports
    /var/nfsshare *(rw,sync,no_root_squash,no_all_squash)
  • Restart the NFS service to apply the changes:
    systemctl restart nfs-server
  • Add the NFS and rpcbind services to the firewall:
    firewall-cmd --permanent --zone=public --add-service=nfs
    firewall-cmd --permanent --zone=public --add-service=rpcbind
    firewall-cmd --reload


The NFS server is now ready to be used.
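Before moving on, it is worth checking that the share is reachable from outside the server. The commands below are a minimal verification sketch, assuming the server's IP is 192.168.1.10 (replace it with your own); note that every Kubernetes node that will mount NFS volumes also needs the nfs-utils package installed:

    yum install -y nfs-utils      # NFS client tooling, also required on the Kubernetes nodes
    showmount -e 192.168.1.10     # the /var/nfsshare export should be listed
    # if showmount is blocked by the firewall, also allow the mountd service on the server:
    #   firewall-cmd --permanent --zone=public --add-service=mountd && firewall-cmd --reload
    mount -t nfs 192.168.1.10:/var/nfsshare /mnt   # optionally mount it once by hand
    touch /mnt/test && rm /mnt/test                # confirm read/write access
    umount /mnt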

Step 2. Install the NFS client provisioner

To achieve this, we will rely on the Kubernetes external storage provisioners (https://github.com/kubernetes-incubator/external-storage). An external provisioner is a dynamic volume provisioner whose code lives outside the Kubernetes core.
It relies on a StorageClass object that references the external provisioner instance. That instance then watches for PersistentVolumeClaims requesting that specific StorageClass and automatically creates the matching PersistentVolumes.
In this case we rely on the nfs-client provisioner (https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client), which provides those volumes on top of an existing NFS server.

In order to use it, several steps are needed:
  • Clone the external-storage repository and switch to the nfs-client folder:
    git clone https://github.com/kubernetes-incubator/external-storage
    cd external-storage/nfs-client
  • Customize the deploy/class.yaml file to give a custom provisioner name to your instance:
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: managed-nfs-storage
    provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME
    parameters:
      archiveOnDelete: "false"
  • Customize the deploy/deployment.yaml file to specify the location and shared folder of your NFS server, and to set the same provisioner name:
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: nfs-client-provisioner
    ---
    kind: Deployment
    apiVersion: extensions/v1beta1
    metadata:
      name: nfs-client-provisioner
    spec:
      replicas: 1
      strategy:
        type: Recreate
      template:
        metadata:
          labels:
            app: nfs-client-provisioner
        spec:
          serviceAccountName: nfs-client-provisioner
          containers:
            - name: nfs-client-provisioner
              image: quay.io/external_storage/nfs-client-provisioner:latest
              volumeMounts:
                - name: nfs-client-root
                  mountPath: /persistentvolumes
              env:
                - name: PROVISIONER_NAME
                  value: fuseim.pri/ifs # or choose another name, must match the StorageClass provisioner
                - name: NFS_SERVER
                  value: <<IP_OF_YOUR_NFS_SERVER>>
                - name: NFS_PATH
                  value: <<PATH_TO_NFS_SHARED_FOLDER>>
          volumes:
            - name: nfs-client-root
              nfs:
                server: <<IP_OF_YOUR_NFS_SERVER>>
                path: <<PATH_TO_NFS_SHARED_FOLDER>>
  • Create the objects into your kubernetes cluster:
    kubectl create -f deploy/rbac.yaml
    kubectl create -f deploy/class.yaml
    kubectl create -f deploy/deployment.yaml
  • To actually start testing the system, you will need to create a PersistentVolumeClaim using that StorageClass. Then you will need to create a pod that uses this PersistentVolumeClaim:
    claim.yaml
    ----------
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: test-claim
      annotations:
        volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Mi

    pod.yaml
    --------
    kind: Pod
    apiVersion: v1
    metadata:
      name: test-pod
    spec:
      containers:
        - name: test-pod
          image: gcr.io/google_containers/busybox:1.24
          command:
            - "/bin/sh"
          args:
            - "-c"
            - "touch /mnt/SUCCESS && exit 0 || exit 1"
          volumeMounts:
            - name: nfs-pvc
              mountPath: "/mnt"
      restartPolicy: "Never"
      volumes:
        - name: nfs-pvc
          persistentVolumeClaim:
            claimName: test-claim


    kubectl create -f claim.yaml
    kubectl create -f pod.yaml
    You can see that pods can now be started with dynamically provisioned NFS volumes, backed by your existing NFS server installation.
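As a final verification sketch, you can check that the claim was bound and that the provisioner did its work on the NFS server. The per-claim directory created under the shared folder is named after the namespace, claim and volume, so the exact name will differ on your system:

    kubectl get pvc test-claim    # STATUS should be Bound
    kubectl get pv                # a dynamically provisioned volume should appear

    # on the NFS server: one subdirectory per claim, containing the SUCCESS file written by the test pod
    ls /var/nfsshare/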
