Add RHEL8 nodes to OpenShift deployments

This blog post shows how to automatically enroll RHEL 8 (real-time) nodes into OpenShift 4.1 deployments, using the UPI method.
It assumes that you have set up an OpenShift cluster using UPI, following the corresponding documentation: https://docs.openshift.com/container-platform/4.1/installing/installing_bare_metal
A semi-automated procedure to spin up such a cluster is also shown at https://github.com/redhat-nfvpe/upi-rt . This article assumes that you have this UPI cluster up and running.

Enroll RHEL 8 nodes as workers

By default, all nodes added to an OpenShift cluster are based on RHCOS. But there are use cases where you may need RHEL nodes, such as RT (real-time) nodes, where you need a specific kernel.
This can be achieved with the help of kickstart, plus some specific configuration of the PXE kernel arguments (in this case provided by matchbox).
We are going to boot RHEL 8 images over PXE, adding some specific configuration through a kickstart file to allow them to join an existing OCP cluster.

RHEL 8 PXE images and ISO preparation

The first step before starting the install is to download the installation source. To do it in a disconnected way, you first need to download the RHEL 8 Binary DVD ISO, which can be found at https://access.redhat.com/downloads/content/479/ver=/rhel---8/8.0/x86_64/product-software .
Once downloaded, mount the ISO and copy its content to the HTTP server that is going to be used (/var/lib/matchbox/assets in our case):

mkdir /tmp/mnt_rhel8/
mount -o loop /tmp/rhel8.iso /tmp/mnt_rhel8/
mkdir /var/lib/matchbox/assets/rhel8
cp -ar /tmp/mnt_rhel8/. /var/lib/matchbox/assets/rhel8/
chmod -R 755 /var/lib/matchbox/assets/rhel8

The PXE images (initrd and vmlinuz) can be found at /var/lib/matchbox/assets/rhel8/images/pxeboot.
This will be useful later when configuring the kickstart file and matchbox profiles.
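To quickly verify that the installation tree is being served, you can check the PXE images over HTTP (this sketch assumes the default matchbox assets endpoint on port 8080 and a ${PROVISIONING_IP} variable pointing to your provisioning host):

curl -I http://${PROVISIONING_IP}:8080/assets/rhel8/images/pxeboot/vmlinuz
curl -I http://${PROVISIONING_IP}:8080/assets/rhel8/images/pxeboot/initrd.img

Both requests should return a 200 response.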

Kickstart file generation

Kickstart file generation can be easily achieved by using the helper scripts at https://github.com/redhat-nfvpe/upi-rt/tree/master/kickstart . This directory contains several helper scripts for different distros, plus specific real-time configs. We are going to focus on https://github.com/redhat-nfvpe/upi-rt/blob/master/kickstart/add_kickstart_for_rhel8_rt.sh , which generates a kickstart file that allows a RHEL 8 RT node to join an existing cluster.

Before running this script, we need to define the following settings. The script relies on these variables being present in a $HOME/settings_upi.env file, which it sources. So we will create this file with the following content (a sample file is shown after this list):
  • CLUSTER_NAME: name of the OCP cluster
  • CLUSTER_DOMAIN: domain for the cluster
  • PULL_SECRET: pull secret that can be extracted from https://cloud.redhat.com/openshift/install/metal/user-provisioned
  • KUBECONFIG_PATH: path to the kubeconfig file that has been generated. When using UPI, it is auth/kubeconfig in the directory where you generated your ignition files.
  • ROOT_PASSWORD: a root password, in case you need to log in to your server via the console
  • RH_USERNAME, RH_PASSWORD, RH_POOL: username/password and pool id matching your Red Hat subscription. You need to have permissions to grab RHEL 8 repositories with that pool.
  • RHEL_INSTALL_ENDPOINT: path to your uncompressed DVD ISO (if using matchbox it will be http://${PROVISIONING_IP}:8080/assets/rhel8)
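As an illustration, a minimal $HOME/settings_upi.env could look like the following. All values are placeholders that you need to replace with your own, and whether the variables need to be exported is an assumption here; check the script itself for the exact variable handling:

export CLUSTER_NAME=mycluster
export CLUSTER_DOMAIN=example.com
export PULL_SECRET='<contents of your pull secret>'
export KUBECONFIG_PATH=/path/to/install-dir/auth/kubeconfig
export ROOT_PASSWORD='<a root password>'
export RH_USERNAME='<your Red Hat username>'
export RH_PASSWORD='<your Red Hat password>'
export RH_POOL='<your subscription pool id>'
export RHEL_INSTALL_ENDPOINT=http://192.168.111.1:8080/assets/rhel8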
With this file in place, you can just run the add_kickstart_for_rhel8_rt.sh script, and it will generate a rhel8-rt-worker-kickstart.cfg file that will perform the automated install. This file needs to be copied to the /var/lib/matchbox/assets directory.
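For example, assuming you have cloned the upi-rt repository on the provisioning host and that the generated file is written to the current directory:

cd upi-rt/kickstart
./add_kickstart_for_rhel8_rt.sh
cp rhel8-rt-worker-kickstart.cfg /var/lib/matchbox/assets/
chmod 644 /var/lib/matchbox/assets/rhel8-rt-worker-kickstart.cfg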

What is kickstart doing?

The generated kickstart executes a typical RHEL 8 unattended install, but also performs these extra tasks:
  • write /etc/profile.env with the subscription data, to be used later for registering the system and subscribing to the right repos
  • create a core user with root permissions, adding your pubkey (from $HOME/.ssh/id_rsa.pub) to its authorized keys
  • write the pull secret (that has been passed in settings_upi.env) into a temporary file, to be used later
  • write the kubeconfig file (that has been passed in settings_upi.env) to /root/.kube/config, to be used later
  • write the ignition endpoint (hardcoded to http://api.$CLUSTER_NAME.$CLUSTER_DOMAIN:22623/config/worker) in a temporary file, to be used later
  • register the system with the credentials provided, and enable the rhel-8-for-x86_64-baseos-rpms, rhel-8-for-x86_64-appstream-rpms and rhocp-4.1-for-rhel-8-x86_64-rpms repos
  • install the packages needed to work as an OpenShift node, including packages like cri-o, hyperkube, openshift-clients, etc.
  • perform system adjustments: disable swap, enable cri-o, enable IP forwarding, manage SELinux and cgroups settings, etc.
  • grab the content from the previously written ignition endpoint and store it in a temporary file (/tmp/bootstrap.ign), to be used later
  • create a runignition.service unit, run just once, that will perform the enrollment of the node
In the real-time case, the kickstart also enables the rhel-8-for-x86_64-rt-rpms repo and installs the RT bits.
After that, the system is rebooted to allow the runignition service to run.
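As a rough illustration of these extra steps, the %post section of the generated kickstart performs commands along these lines. This is a simplified sketch: package names and exact options are illustrative, and the generated rhel8-rt-worker-kickstart.cfg is the authoritative version:

# register the system and enable the required repos
subscription-manager register --username "${RH_USERNAME}" --password "${RH_PASSWORD}"
subscription-manager attach --pool "${RH_POOL}"
subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms \
  --enable=rhel-8-for-x86_64-appstream-rpms \
  --enable=rhocp-4.1-for-rhel-8-x86_64-rpms
# install the packages needed to act as an OpenShift node (names are illustrative)
dnf install -y cri-o openshift-hyperkube openshift-clients podman
# system adjustments: disable swap, enable IP forwarding, enable cri-o
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-openshift.conf
systemctl enable crio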

What is runignition service doing?

This service needs to be executed outside kickstart, because it needs to run podman, which cannot be done inside the kickstart chroot. It performs these steps:
  • gets the version of the cluster by performing an oc get clusterversion, using the /root/.kube/config credentials
  • with this cluster version, downloads the matching machine-config-daemon image
  • uses podman to run this machine-config-daemon image, passing it the content downloaded before (/tmp/bootstrap.ign), which contains all the tasks that a worker needs to perform in order to enroll in the cluster
  • after that is completed, reboots the node
The next time the node starts, it will join the cluster, and you will have RHEL 8 worker nodes.
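The service boils down to something like the following sketch. The image lookup and the machine-config-daemon flags shown here are assumptions based on the daemon's once-from mode; the generated unit in the upi-rt repo is the authoritative version:

export KUBECONFIG=/root/.kube/config
# find the release image of the running cluster and the matching machine-config-daemon image
RELEASE_IMAGE=$(oc get clusterversion version -o jsonpath='{.status.desired.image}')
MCD_IMAGE=$(oc adm release info --image-for=machine-config-daemon "${RELEASE_IMAGE}")
# run the machine-config-daemon once, applying the worker ignition fetched during kickstart
podman run --privileged --pid=host --rm -v /:/rootfs \
  --entrypoint /usr/bin/machine-config-daemon "${MCD_IMAGE}" \
  start --node-name "$(hostname)" --once-from /tmp/bootstrap.ign --skip-reboot
reboot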

How to configure PXE boot for the worker

This repository uses terraform and matchbox to automate PXE booting, but any other PXE server will work. The important things to consider are the kernel parameters and the PXE images. So, in order to make it work, you will need to pass these parameters on PXE:
  • kernel: assets/rhel8_vmlinuz (or full URL if needed)
  • initrd: assets/rhel8_initrd.img (or full URL if needed)
  • console=tty0 console=ttyS0,115200n8 (or more consoles if needed)
  • rd.neednet=1
  • inst.ks=http://provisioning_url/assets/rhel8-rt-worker-kickstart.cfg
When booting from PXE with these parameters, the server will start with the RHEL 8 PXE images and perform an automated installation, with the extra bits described in the previous section.
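For reference, a matchbox profile with these parameters could look roughly like this. The profile id, file names and kickstart URL are examples; adapt them to where you copied the images and the kickstart file:

cat > /var/lib/matchbox/profiles/rhel8-rt-worker.json <<'EOF'
{
  "id": "rhel8-rt-worker",
  "name": "RHEL 8 RT worker",
  "boot": {
    "kernel": "/assets/rhel8_vmlinuz",
    "initrd": ["/assets/rhel8_initrd.img"],
    "args": [
      "console=tty0",
      "console=ttyS0,115200n8",
      "rd.neednet=1",
      "inst.ks=http://provisioning_url/assets/rhel8-rt-worker-kickstart.cfg"
    ]
  }
}
EOF

A matchbox group then needs to match the worker (for example by MAC address) to this profile, as usual.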


After it has completed, you may need to approve certificate signing requests (CSRs) as explained at https://docs.openshift.com/container-platform/4.1/installing/installing_bare_metal/installing-bare-metal.html#installation-approve-csrs_installing-bare-metal . You may also need to perform additional configuration on your cluster, according to the documentation.
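Pending CSRs can be listed and approved with the oc client, for example:

oc get csr
oc adm certificate approve <csr_name>
# or approve all pending CSRs in one go
oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve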
