
Customize OpenStack images for booting from iSCSI

When working with OpenStack Ironic and TripleO, and using the boot from iSCSI feature, you may need to add some kernel parameters to the deployment image for it to work.
With some specific hardware, the deployment image needs to contain specific kernel parameters on boot. For example, when trying to boot from iSCSI with iBFT NICs, you need to add the following kernel parameters:

rd.iscsi.ibft=1 rd.iscsi.firmware=1 

The TripleO image that is generated by default doesn't contain these parameters, because they are very hardware-specific. It is also not currently possible to pass these parameters through Ironic.

The solution is to customize the deployment image and add these kernel parameters. The overcloud-full.qcow2 image that comes by default with TripleO is a partition image, which means that the bootloader is not installed in the image itself; Ironic installs it at deployment time. So the way to add custom kernel parameters is to modify the /etc/default/grub file inside the image.
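If you want to double-check that your image really is a partition image (a single filesystem, with no partition table or bootloader inside), virt-filesystems from libguestfs-tools (installed in the first step below) can show that. This is just an optional sanity check:

export LIBGUESTFS_BACKEND=direct
virt-filesystems --all --long -a overcloud-full.qcow2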

So let's do it step by step:

1. On the undercloud, install libguestfs-tools: sudo yum install -y libguestfs-tools
2. In /home/stack, you will find the overcloud-full.qcow2 image. Start customizing it:

export LIBGUESTFS_BACKEND=direct
guestfish -a overcloud-full.qcow2
<fs> run

<fs> list-filesystems

list-filesystems shows the filesystems contained in the image; in the default overcloud-full.qcow2 the root filesystem sits directly on /dev/sda, which is the one to mount:

<fs> mount /dev/sda /
<fs> vi /etc/default/grub

The content of the file will be similar to this:

GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8 crashkernel=auto rhgb quiet"
GRUB_DISABLE_RECOVERY="true"

Edit GRUB_CMDLINE_LINUX and append the kernel parameters you need there. Exit the editor saving the file, and then exit guestfish:

<fs> exit
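As a reference, after appending the iSCSI parameters from above, the GRUB_CMDLINE_LINUX line would look similar to this (keeping the default values shown earlier, which may differ in your image):

GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8 crashkernel=auto rhgb quiet rd.iscsi.ibft=1 rd.iscsi.firmware=1"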

Now it is time to upload the modified image:

source /home/stack/stackrc
openstack overcloud image upload --existing
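Optionally, you can verify that the images in Glance have been replaced by listing them (just a sanity check, not strictly needed):

openstack image list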

Now redeploy, and the image will boot with the modified kernel parameters, allowing boot from iSCSI to work with your hardware.
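The redeploy is done with your usual deploy command; as a minimal sketch (your real command will include your own templates, roles and environment files):

openstack overcloud deploy --templates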
