
Start using whole disk images with TripleO

What are the differences between flat partition image and whole disk image?

In order to understand this article, you first need to know what a flat partition image and a whole disk image are, and how they differ:
  • flat partition image: a disk image that contains all the desired content in a filesystem, but does not carry any information about partitions, and does not include a bootloader. In order to boot from a flat partition image, the kernel and ramdisk images need to be passed independently when booting, relying on an external system to mount it.
  • whole disk image: an image that contains all the information about partitions, bootloaders... as well as all the desired content. It can boot independently, without the need for external kernels or systems to mount it.
Right now, OpenStack Ironic supports both kinds of images, but OpenStack TripleO only supported flat partition images.

TripleO added support for whole disk images

Since python-tripleoclient 5.6.0, TripleO supports the upload and usage of whole disk images. This will land officially in the Ocata release.
Right now, the overcloud image upload command performs the following steps:
  1. Upload the overcloud-full.qcow2 image
  2. Upload the related vmlinuz and initrd files
  3. Add a property in glance to the overcloud-full image, associating it with the uploaded vmlinuz and initrd files.
This approach is the desired one for flat partition images, because independent vmlinuz and initrd files are needed to boot. But in the case of whole disk images, only the qcow2 file itself is needed.
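
Roughly, those steps amount to something like the following with the plain openstack client (a simplified sketch, not the exact tripleoclient implementation):

openstack image create overcloud-full-vmlinuz --disk-format aki \
    --container-format aki --file overcloud-full.vmlinuz
openstack image create overcloud-full-initrd --disk-format ari \
    --container-format ari --file overcloud-full.initrd
# the kernel_id and ramdisk_id properties associate the kernel and ramdisk with the image
openstack image create overcloud-full --disk-format qcow2 \
    --container-format bare --file overcloud-full.qcow2 \
    --property kernel_id=<vmlinuz image id> \
    --property ramdisk_id=<initrd image id>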

So a new flag, called --whole-disk, has been added to the overcloud image upload command; it just uploads the qcow2 image, skipping all the additional steps. The following command can be used to upload whole disk images:

openstack overcloud image upload --whole-disk
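
After the upload, you can check that glance contains just the single image (assuming the default overcloud-full name), with no kernel_id or ramdisk_id properties set:

openstack image list
openstack image show overcloud-full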

How to generate whole disk images

Now that TripleO can use whole disk images, there needs to be a way to build them. The standard TripleO overcloud image is just a flat partition image. And although there are efforts in the diskimage-builder project to produce whole disk images with the right partitions and volumes, that work is still in progress and will not land in Ocata.
A simple alternative is to use libguestfs (via its Python bindings) to convert the flat partition overcloud image into a whole disk one. With a simple script that takes the initial image, adds partitions on demand, and installs a bootloader, you can produce a whole disk image that is ready to use in TripleO.

A sample script will look like this:

#!/usr/bin/env python
import guestfs
import os

# remove any previously generated image
try:
    os.unlink("/tmp/overcloud-full-partitioned.qcow2")
except OSError:
    pass

g = guestfs.GuestFS(python_return_dict=True)

# open the old image read-only and create the new one
print("Creating new repartitioned image")
g.add_drive_opts("/tmp/overcloud-full.qcow2", format="qcow2", readonly=1)
g.disk_create("/tmp/overcloud-full-partitioned.qcow2", "qcow2",
              int(10.2 * 1024 * 1024 * 1024))  # 10.2G
g.add_drive_opts("/tmp/overcloud-full-partitioned.qcow2", format="qcow2", readonly=0)
g.launch()

# create the partitions for new image
print("Creating the initial partitions")
g.part_init("/dev/sdb", "mbr")
g.part_add("/dev/sdb", "primary", 2048, 616448)
g.part_add("/dev/sdb", "primary", 616449, -1)

g.pvcreate("/dev/sdb2")
g.vgcreate("vg", ['/dev/sdb2', ])
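# create the logical volumes; lvcreate sizes are in MB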
g.lvcreate("var", "vg", 5 * 1024)
g.lvcreate("tmp", "vg", 500)
g.lvcreate("swap", "vg", 250)
g.lvcreate("home", "vg", 100)
g.lvcreate("root", "vg", 4 * 1024)
g.part_set_bootable("/dev/sdb", 1, True)

# add filesystems to volumes
print("Adding filesystems")
ids = {}
keys = ['var', 'tmp', 'swap', 'home', 'root']
volumes = ['/dev/vg/var', '/dev/vg/tmp', '/dev/vg/swap', '/dev/vg/home', '/dev/vg/root']
swap_volume = volumes[2]

for count, volume in enumerate(volumes):
    # skip the swap volume here, it gets mkswap instead of a filesystem
    if volume != swap_volume:
        g.mkfs('ext4', volume)
        ids[keys[count]] = g.vfs_uuid(volume)

# create filesystem on boot and swap
g.mkfs('ext4', '/dev/sdb1')
g.mkswap_opts(volumes[2])
ids['swap'] = g.vfs_uuid(volumes[2])

# mount drives and copy content
print("Start copying content")
g.mkmountpoint('/old')
g.mkmountpoint('/root')
g.mkmountpoint('/boot')
g.mkmountpoint('/home')
g.mkmountpoint('/var')
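# the flat partition image has its filesystem directly on the device, with no partition table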
g.mount('/dev/sda', '/old')

g.mount('/dev/sdb1', '/boot')
g.mount(volumes[4], '/root')
g.mount(volumes[3], '/home')
g.mount(volumes[0], '/var')

# copy content to root
results = g.ls('/old/')
for result in results:
    if result not in ('boot', 'home', 'tmp', 'var'):
        print("Copying %s to root" % result)
        g.cp_a('/old/%s' % result, '/root/')

# copy extra content
folders_to_copy = ['boot', 'home', 'var']
for folder in folders_to_copy:
    results = g.ls('/old/%s/' % folder)
    for result in results:
        print("Copying %s to %s" % (result, folder))
        g.cp_a('/old/%s/%s' % (folder, result),
               '/%s/' % folder)

# create /etc/fstab file
print("Generating fstab content")
fstab_content = """
UUID={boot_id} /boot ext4 defaults 0 2
UUID={root_id} / ext4 defaults 0 1
UUID={swap_id} none swap sw 0 0
UUID={tmp_id} /tmp ext4 defaults 0 2
UUID={home_id} /home ext4 defaults 0 2
UUID={var_id} /var ext4 defaults 0 2
""".format(
    boot_id=g.vfs_uuid('/dev/sdb1'),
    root_id=ids['root'],
    swap_id=ids['swap'],
    tmp_id=ids['tmp'],
    home_id=ids['home'],
    var_id=ids['var'])

g.write('/root/etc/fstab', fstab_content)

# unmount filesystems
g.umount('/root')
g.umount('/boot')
g.umount('/old')
g.umount('/home')
g.umount('/var')

# mount in the right directories to install bootloader
print("Installing bootloader")
g.mount(volumes[4], '/')
# boot, var, home and tmp were not copied into the root filesystem,
# so create the mount points that /etc/fstab expects
g.mkdir('/boot')
g.mkdir('/var')
g.mkdir('/home')
g.mkdir('/tmp')
g.mount('/dev/sdb1', '/boot')
g.mount(volumes[0], '/var')

# do a selinux relabel
g.selinux_relabel('/etc/selinux/targeted/contexts/files/file_contexts', '/', force=True)
g.selinux_relabel('/etc/selinux/targeted/contexts/files/file_contexts', '/var', force=True)

g.sh('grub2-install --target=i386-pc /dev/sdb')
g.sh('grub2-mkconfig -o /boot/grub2/grub.cfg')

# create dracut.conf file
dracut_content = """
add_dracutmodules+="lvm crypt"
"""
g.write('/etc/dracut.conf', dracut_content)

# update initramfs to include lvm and crypt
kernels = g.ls('/lib/modules')
for kernel in kernels:
    print("Updating dracut to include modules in kernel %s" % kernel)
    g.sh('dracut --force /boot/initramfs-%s.img %s' % (kernel, kernel))
g.umount('/boot')
g.umount('/var')
g.umount('/')

# close images
print("Finishing image")
g.shutdown()
g.close()

This sample script creates a whole disk image with the following steps:
  1. Open the old overcloud-full image (flat partition) to use it as a base for the new one
  2. Create a new image (the whole disk one) with the desired size (for example, 10 GB)
  3. Create partitions and volumes on the new image. Note that you can create whatever partitions and volumes you want, with sizes that match your environment. In the example we create an isolated partition for /boot, and we use LVM volumes for the rest of the filesystem.
  4. Create the initial filesystems on the partitions and volumes. You can add ext4, xfs, swap partitions... depending on your needs
  5. Mount the filesystems and copy content to the right partitions from the origin image to the target one. Note that the target root partition is mounted at /root, not at /, because mounting it at / would cause naming conflicts when copying the old content into the new image.
  6. Generate and copy the /etc/fstab file to the target image. You need to capture the UUIDs of the generated filesystems and reflect them in the generated /etc/fstab file
  7. Unmount all the filesystems, on both the old and the new image
  8. Mount the partitions of the target image only, this time with the final layout (root at /, and the right /boot and /var partitions as well)
  9. Install the bootloader. The best way to do it is to run a shell in the chroot of the generated image, calling grub2-install and grub2-mkconfig. This installs the grub2 bootloader on the whole disk image
  10. Unmount all the filesystems and close the image
  11. After that you will have an overcloud-full-partitioned image, ready to use in TripleO deployments (see the sketch below)
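
Putting it all together, the workflow could look like this (a sketch: the script name is an example, and it assumes the client picks up the image under its default overcloud-full.qcow2 name inside --image-path):

cp overcloud-full.qcow2 /tmp/
python convert-to-whole-disk.py
mkdir -p ~/whole-disk-images
cp /tmp/overcloud-full-partitioned.qcow2 ~/whole-disk-images/overcloud-full.qcow2
openstack overcloud image upload --whole-disk --image-path ~/whole-disk-images
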
Thanks to Pino Toscano and Richard Jones for really good advice on the guestfs process.

Comments

  1. Hi

    If we deploy the whole disk image built by the above script, can it use all the disk space available on the overcloud machines? How can we make it use all the disk space?

    Replies
    1. So you have a couple of options here:
      - First one, if you know the size of your disk: you can define the size of the volumes according to the size of your disk. Of course you then need different images depending on the disk size, but it is an easy way to achieve it.
      - Second one, grow the volumes after deployment. On first deploy the initial partition and volumes will be created, with the fixed sizes you gave when creating the image. After that, you can create a new partition with the remaining unused disk space. Then create a physical volume on that partition and extend the volume group by adding the new physical volume (with vgextend). Now that the volume group has extra space, you can grow the logical volumes to pick it up. Finally, if you used xfs, you can increase the filesystem size dynamically with xfs_growfs (see the sketch below).
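
      Here is a rough sketch of that second approach (device and volume names are assumptions based on the script above; adapt them to your layout):

      # create a new partition from the unused space (here assumed to become /dev/sda3)
      parted /dev/sda -- mkpart primary 10.2GB 100%
      # turn it into a physical volume and extend the existing volume group
      pvcreate /dev/sda3
      vgextend vg /dev/sda3
      # grow a logical volume into the new free space
      lvextend -l +100%FREE /dev/vg/var
      # grow the filesystem: xfs_growfs for xfs, resize2fs for ext4
      xfs_growfs /var          # if /var is xfs
      # resize2fs /dev/vg/var  # if /var is ext4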


