
Generate Fedora Atomic images using diskimage-builder

About Atomic project - http://www.projectatomic.io

Atomic is a lightweight operating system assembled from RPM content. It is mainly designed to run applications in Docker containers, and Atomic hosts are available based on RHEL, Fedora and CentOS.
Project Atomic includes the following components: Docker, Kubernetes, rpm-ostree and systemd.

What are the advantages of Atomic? Using Atomic distributions reduces the patching burden for administrators, and Docker containers offer a clear path to delivering consistent and fully tested stacks. Containers secured with Linux namespaces, cgroups and SELinux provide isolation close to that of a VM, with much greater flexibility and efficiency.

About diskimage-builder - http://docs.openstack.org/developer/diskimage-builder/

Diskimage-builder is a tool for building disk images, file system images and ramdisk images. It is the tool used in OpenStack projects to generate base images for deployments and testing.

The problem: An overly-manual process.

Atomic images are used in Magnum, the Containers project in OpenStack. Magnum uses Heat to orchestrate an OS image that contains Docker and Kubernetes, and runs that image on either virtual machines or bare metal in a cluster configuration.
Magnum uses a pre-built Fedora Atomic image, shipped with Docker, Kubernetes, etcd and Flannel. The image was manually built, then uploaded to FedoraPeople: http://docs.openstack.org/developer/magnum/dev/dev-build-atomic-image.html
This process is not optimal: it involves several manual steps, and it adds an external dependency on the FedoraPeople site whenever that image needs to be downloaded.

The solution: Automate image creation with diskimage-builder

diskimage-builder generates custom images by starting from a base distro and adding a set of custom elements. Using that process, we can consume a basic Fedora 23 image and add the elements needed to convert it into a valid Atomic image, with all the components Magnum needs.

Anatomy of fedora-atomic element: https://git.openstack.org/cgit/openstack/magnum/tree/magnum/elements/fedora-atomic

To generate the Atomic image with diskimage-builder, we need to create an element that depends on the fedora-minimal element and adds the extra configuration needed to convert that image into Atomic.

element-deps - Dependencies on other diskimage-builder elements (a sketch of this file follows the list)
  • fedora-minimal: element that provides a basic Fedora image. The desired version can be specified by exporting the DIB_RELEASE var (export DIB_RELEASE=23)
  • growroot: element that forces the root partition to grow on first boot
  • package-installs: element that sets up the machinery needed to install extra packages on top of the fedora-minimal image. It will install the ostree package, which is needed to generate the Atomic tree.
  • vm: this element partitions the base image disk, creating a boot partition where the operating system will be installed. It depends on the bootloader element.
  • bootloader: installs grub2 on the boot partition, allowing the image to boot automatically into the installed operating system.
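
As a reference, element-deps is just a plain-text file listing one element name per line. A minimal sketch, assuming bootloader is pulled in transitively through vm as described above (the actual file in the magnum repository may differ slightly):

fedora-minimal
growroot
package-installs
vm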

environment.d - Environment settings needed
It exports the FEDORA_ATOMIC_TREE_URL and FEDORA_ATOMIC_TREE_REF vars. The first one points to https://dl.fedoraproject.org/pub/fedora/linux/atomic/23/ , the URL where the deployed tree for Fedora Atomic 23 is stored. The second one points to 954bdbeebebfa87b625d9d7bd78c81400bdd6756fcc3205987970af4b64eb678 , the reference to the docker-host branch that will be used for our deployments. We use the docker-host branch because it contains all the components needed for Magnum (Docker, Kubernetes, etcd, Flannel).
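
In diskimage-builder, environment.d is a directory of shell snippets that are sourced during the build, so the exports could look roughly like this (a sketch only; the exact file name and default-value handling in the magnum element may differ):

export FEDORA_ATOMIC_TREE_URL=${FEDORA_ATOMIC_TREE_URL:-"https://dl.fedoraproject.org/pub/fedora/linux/atomic/23/"}
export FEDORA_ATOMIC_TREE_REF=${FEDORA_ATOMIC_TREE_REF:-"954bdbeebebfa87b625d9d7bd78c81400bdd6756fcc3205987970af4b64eb678"}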

package-installs - Extra packages needed
It only installs the ostree package, which will be used to deploy our Atomic tree.
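
With the package-installs element, the packages are usually declared in a package-installs.yaml file inside the element, where an entry with no extra options is just the package name followed by a colon. A minimal sketch (assuming the YAML format; the actual file in the element may be laid out differently):

ostree: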

finalise.d - Scripts that run when image setup finishes
This contains the script that will deploy the Atomic tree, based on the environment vars provided before. It performs the following steps:

1. Deploy ostree in root directory:
ostree admin os-init fedora-atomic
ostree remote add --set=gpg-verify=false fedora-atomic ${FEDORA_ATOMIC_TREE_URL}
ostree pull fedora-atomic ${FEDORA_ATOMIC_TREE_REF}
ostree remote delete fedora-atomic
ostree admin deploy --os=fedora-atomic ${FEDORA_ATOMIC_TREE_REF} \
--karg-proc-cmdline --karg=selinux=0
 
2. Copy original /etc/fstab content to the target ostree directory, so the mounts are preserved:
SYSROOT=/ostree/deploy/fedora-atomic/deploy/${FEDORA_ATOMIC_TREE_REF}.0
cp /etc/fstab $SYSROOT/etc/
 
3. Dynamically find the directory where the tree has been deployed, and the associated initramfs image, in order to configure the grub bootloader later:
DEPLOYED_DIRECTORY=$(find /boot/ostree -name "fedora-atomic-*" -type d)
DEPLOYED_ID=${DEPLOYED_DIRECTORY##*-}
INIT_IMAGE=$(find ${DEPLOYED_DIRECTORY} -name "initramfs*.img")
 
4. Update the grub2 bootloader configuration, to force the image to boot into the deployed tree instead of the original Fedora image:
cat > /etc/grub.d/15_ostree <<EOF
cat <<EOL
menuentry 'Fedora 23 (ostree)' --class gnu-linux --class gnu --class os \
--unrestricted "ostree-0-${DIB_IMAGE_ROOT_FS_UUID}" {
set gfxpayload=text
insmod gzio
insmod part_msdos
insmod ext2
search --no-floppy --set=root --label ${DIB_ROOT_LABEL}
linux16 ${DEPLOYED_DIRECTORY}/vmlinuz root=LABEL=${DIB_ROOT_LABEL} ro nofb nomodeset \
vga=normal console=tty0 console=ttyS0,115200 no_timer_check rd.shell=0 \
ostree=/ostree/boot.1/fedora-atomic/${DEPLOYED_ID}/0
initrd16 ${INIT_IMAGE}
}
EOL
EOF
chmod +x /etc/grub.d/15_ostree

5. Enable the cloud-init services at boot, because cloud-init is disabled by default in the tree we are consuming. As we need to run these images as VMs, we need a functional cloud-init:

ln -sf $SYSROOT/usr/lib/systemd/system/cloud-config.service \
$SYSROOT/etc/systemd/system/multi-user.target.wants/cloud-config.service
ln -sf $SYSROOT/usr/lib/systemd/system/cloud-final.service \
$SYSROOT/etc/systemd/system/multi-user.target.wants/cloud-final.service
ln -sf $SYSROOT/usr/lib/systemd/system/cloud-init.service \
$SYSROOT/etc/systemd/system/multi-user.target.wants/cloud-init.service
ln -sf $SYSROOT/usr/lib/systemd/system/cloud-init-local.service \
$SYSROOT/etc/systemd/system/multi-user.target.wants/cloud-init-local.service

6. Update docker storage, to use a thinly-provisioned logical volume:
cat > $SYSROOT/etc/sysconfig/docker-storage <<EOF
DOCKER_STORAGE_OPTIONS="--storage-opt dm.thinpooldev=/dev/mapper/docker-docker--pool"
EOF

7. Disable the docker-storage-setup service, because we already forced Docker to use the volume provided in the previous step:
rm $SYSROOT/etc/systemd/system/multi-user.target.wants/docker-storage-setup.service

8. Clean up the previous grub configuration and generate a new one:
rm /etc/grub.d/10_linux
grub2-mkconfig -o /boot/grub2/grub.cfg 

9. Perform an image cleanup, removing the old Fedora kernel and initramfs images and the packages that are no longer needed, to reduce the size of the final image:
rm -rf /boot/vmlinuz*
rm -rf /boot/initramfs*
 
if [ $DIB_RELEASE -ge 22 ]; then
    dnf -y remove dracut grubby kernel initscripts man-pages redhat-lsb-core \
selinux-policy selinux-policy-targeted
    dnf -y autoremove
    dnf clean all
else
    yum -y remove dracut grubby kernel initscripts man-pages redhat-lsb-core \
selinux-policy selinux-policy-targeted
    yum -y autoremove
    yum clean all
fi

These are the steps to convert an initial Fedora image into an Atomic one.

How to generate the final image

1. Install dependencies (a sample install sequence is sketched after this list):

- diskimage-builder: http://docs.openstack.org/developer/diskimage-builder/user_guide/installation.html
- magnum project: https://git.openstack.org/cgit/openstack/magnum 
- extra packages: python-dev, build-essential, python-pip, kpartx, python-lzma, qemu-utils, yum, yum-utils
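
A sample install sequence on a Debian/Ubuntu build host (assumed here; package names differ on Fedora/CentOS hosts):

    sudo apt-get install -y python-dev build-essential python-pip kpartx python-lzma qemu-utils yum yum-utils
    sudo pip install diskimage-builder
    git clone https://git.openstack.org/openstack/magnum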

2. Export the environment variables to configure the image build:
    export ELEMENTS_PATH=/path/to/diskimage-builder/elements:/path/to/magnum/elements
    export DIB_RELEASE=23     # this can be switched to the desired version
    export DIB_IMAGE_SIZE=2.2   # we need to give a bit more space to loopback device
 
3.  Create the image using diskimage-builder:
    disk-image-create fedora-atomic

This will generate a fedora-atomic.qcow2 image, which you can upload to Glance, boot with libvirt, or convert to other formats.
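
For example, uploading the result to Glance could look like this (assuming python-openstackclient is installed and your OpenStack credentials are loaded in the environment):

    openstack image create --disk-format qcow2 --container-format bare \
        --file fedora-atomic.qcow2 fedora-atomic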
