
Security hardened images with volumes

Applies since the Queens release

This article is a continuation of the previous post about building security hardened images.

How to build the security hardened image with volumes

Starting with Queens, security hardened images can be built using volumes. This has the advantage of greater flexibility when resizing the different filesystems.

The process of building the security hardened image is the same as in the previous blog post, but there has been a change in how the partitions, volumes and filesystems are defined. Now there is a pre-defined partition of 20G, and volumes are created under it. Volume sizes are defined as percentages of the volume group, not as absolute sizes:
  • /              -> 30% (over 6G)
  • /tmp           -> 5% (over 1G)
  • /var           -> 35% (over 7G)
  • /var/log       -> 25% (over 5G)
  • /var/log/audit -> 4% (over 0.8G)
  • /home          -> 1% (over 0.2G)
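As a quick sanity check, those percentages work out to the approximate absolute sizes listed above. A small illustrative calculation (not part of the build procedure), taking the volume group as a round 20G:

```shell
# Illustrative check: absolute sizes implied by the volume percentages,
# taking the volume group as a round 20G (the real VG is slightly smaller).
VG_SIZE_GB=20
for entry in root:30 tmp:5 var:35 log:25 audit:4 home:1; do
  name=${entry%%:*}
  pct=${entry##*:}
  tenths=$(( VG_SIZE_GB * pct / 10 ))   # size in tenths of a GiB
  echo "lv_${name}: ${pct}%VG ~= $(( tenths / 10 )).$(( tenths % 10 ))G"
done
```

The output matches the approximate sizes in the list, e.g. 30%VG of 20G is 6.0G and 4%VG is 0.8G.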
With this new layout based on volumes, you now have two options for resizing, so you can use all the disk space and not be restricted to these 20G: building an image with the right size, or resizing after deployment.

Build an image with the right size

It works in the same way as before: you need to modify the partitioning schema and increase the image size.

1. Modify partitioning schema

To modify the partitioning schema, either to alter the partition sizes or to create or remove partitions, you need to execute:

export DIB_BLOCK_DEVICE_CONFIG='<yaml_schema_with_partitions>'

Run this before executing the openstack overcloud image build command. The YAML currently used to produce the security hardened image is the following, so you can reuse it and update the sizes of the partitions and volumes as needed:

- local_loop:
    name: image0
- partitioning:
    base: image0
    label: mbr
    partitions:
      - name: root
        flags: [ boot,primary ]
        size: 20G
- lvm:
    name: lvm
    base: [ root ]
    pvs:
      - name: pv
        base: root
        options: [ "--force" ]
    vgs:
      - name: vg
        base: [ "pv" ]
        options: [ "--force" ]
    lvs:
      - name: lv_root
        base: vg
        extents: 30%VG
      - name: lv_tmp
        base: vg
        extents: 5%VG
      - name: lv_var
        base: vg
        extents: 35%VG
      - name: lv_log
        base: vg
        extents: 25%VG
      - name: lv_audit
        base: vg
        extents: 4%VG
      - name: lv_home
        base: vg
        extents: 1%VG
- mkfs:
    name: fs_root
    base: lv_root
    type: xfs
    label: "img-rootfs"
    mount:
      mount_point: /
      fstab:
        options: "rw,relatime"
        fck-passno: 1
- mkfs:
    name: fs_tmp
    base: lv_tmp
    type: xfs
    mount:
      mount_point: /tmp
      fstab:
        options: "rw,nosuid,nodev,noexec,relatime"
- mkfs:
    name: fs_var
    base: lv_var
    type: xfs
    mount:
      mount_point: /var
      fstab:
        options: "rw,relatime"
- mkfs:
    name: fs_log
    base: lv_log
    type: xfs
    mount:
      mount_point: /var/log
      fstab:
        options: "rw,relatime"
- mkfs:
    name: fs_audit
    base: lv_audit
    type: xfs
    mount:
      mount_point: /var/log/audit
      fstab:
        options: "rw,relatime"
- mkfs:
    name: fs_home
    base: lv_home
    type: xfs
    mount:
      mount_point: /home
      fstab:
        options: "rw,nodev,relatime"


For a reference about the YAML schema, please visit the diskimage-builder documentation.
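In practice it is easier to keep the modified schema in a file and export the variable from there. A minimal sketch, where the file path is illustrative and the schema is truncated to a single entry (use the full YAML above in real use):

```shell
# Sketch: store the partitioning schema in a file and export it before the
# image build. The path is illustrative; the schema here is truncated to a
# single entry and should be the full YAML in real use.
cat > /tmp/hardened-partitions.yaml <<'EOF'
- local_loop:
    name: image0
EOF
export DIB_BLOCK_DEVICE_CONFIG="$(cat /tmp/hardened-partitions.yaml)"
echo "$DIB_BLOCK_DEVICE_CONFIG" | head -n 1   # - local_loop:
```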

2. Update image size

Once you modify the partitioning and volume schema, you may need to update the size of the generated image, because the sum of partition sizes may exceed the default (20G). To modify the image size, you need to update the config files used to produce the image.
To achieve this, make a copy of /usr/share/openstack-tripleo-common/image-yaml/overcloud-hardened-images.yaml:

cp /usr/share/openstack-tripleo-common/image-yaml/overcloud-hardened-images.yaml /home/stack/overcloud-hardened-images-custom.yaml

Then edit the DIB_IMAGE_SIZE setting contained there, giving it the new size in GB. For example, to build a 40G image (the value 40 is an example; the other settings stay as they are):


DIB_IMAGE_SIZE: '40'
DIB_MODPROBE_BLACKLIST: 'usb-storage cramfs freevxfs jffs2 hfs hfsplus squashfs udf vfat bluetooth'
DIB_BOOTLOADER_DEFAULT_CMDLINE: 'nofb nomodeset vga=normal console=tty0 console=ttyS0,115200 audit=1 nousb'

After creating the new file, execute the image build command pointing at it:

openstack overcloud image build --image-name overcloud-hardened-full --config-file /home/stack/overcloud-hardened-images-custom.yaml --config-file /usr/share/openstack-tripleo-common/image-yaml/overcloud-hardened-images-[centos7|rhel7].yaml

Resize after deployment

With the use of volumes there is another option for resizing the filesystems, this time after deployment. When you finish deploying the image, you will have a partition with the fixed size, and volumes depending on it. You can now create a partition using the remaining disk space:

[root@overcloud-controller-0 heat-admin]# fdisk /dev/vda
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): m
Command action
   a   toggle a bootable flag
   b   edit bsd disklabel
   c   toggle the dos compatibility flag
   d   delete a partition
   g   create a new empty GPT partition table
   G   create an IRIX (SGI) partition table
   l   list known partition types
   m   print this menu
   n   add a new partition
   o   create a new empty DOS partition table
   p   print the partition table
   q   quit without saving changes
   s   create a new empty Sun disklabel
   t   change a partition's system id
   u   change display/entry units
   v   verify the partition table
   w   write table to disk and exit
   x   extra functionality (experts only)

Command (m for help): n
Partition type:
   p   primary (2 primary, 0 extended, 2 free)
   e   extended
Select (default p): p
Partition number (3,4, default 3): 3
First sector (39064548-104857599, default 39065600):
Using default value 39065600
Last sector, +sectors or +size{K,M,G} (39065600-104726527, default 104726527):
Using default value 104726527
Partition 3 of type Linux and of size 31.3 GiB is set

Command (m for help): w
The partition table has been altered!

With that partition, you can create a physical volume, and grow the existing volume group where the initial volumes were created:

[root@overcloud-controller-0 heat-admin]# pvcreate /dev/vda3
  Physical volume "/dev/vda3" successfully created.
[root@overcloud-controller-0 heat-admin]# vgs
  VG #PV #LV #SN Attr   VSize  VFree
  vg   1   6   0 wz--n- 18.62g 12.00m
[root@overcloud-controller-0 heat-admin]# vgextend /dev/vg /dev/vda3
  Volume group "vg" successfully extended
[root@overcloud-controller-0 heat-admin]# vgdisplay
  --- Volume group ---
  VG Name               vg
  System ID            
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  8
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                6
  Open LV               6
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               49.93 GiB
  PE Size               4.00 MiB
  Total PE              12783
  Alloc PE / Size       4765 / 18.61 GiB
  Free  PE / Size       8018 / 31.32 GiB
  VG UUID               54wXr9-pjcT-cyBT-FZ9r-B8u2-2AlU-JhxA4Y

Now, with more space in the volume group, the logical volumes can be extended as well:

[root@overcloud-controller-0 heat-admin]# lvdisplay /dev/vg/lv_root
  --- Logical volume ---
  LV Path                /dev/vg/lv_root
  LV Name                lv_root
  VG Name                vg
  LV UUID                ihoFT4-Q3XO-Nu5M-BAqt-oqDF-WEYW-xT9NxL
  LV Write Access        read/write
  LV Creation host, time test-dib.rdocloud, 2017-10-20 11:10:02 -0400
  LV Status              available
  # open                 1
  LV Size                <5.59 GiB
  Current LE             1430
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0
[root@overcloud-controller-0 heat-admin]# lvextend -l +4000 /dev/vg/lv_root
  Size of logical volume vg/lv_root changed from <5.59 GiB (1430 extents) to 21.21 GiB (5430 extents).
  Logical volume vg/lv_root successfully resized.

The final step is to grow the filesystem:

[root@overcloud-controller-0 heat-admin]# xfs_growfs /
meta-data=/dev/mapper/vg-lv_root isize=512    agcount=4, agsize=366080 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=1464320, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 1464320 to 5560320
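As a sanity check (an illustrative calculation, not part of the procedure), the extent math from the lvextend step and the new block count reported by xfs_growfs describe the same size: each physical extent here is 4 MiB and each XFS block is 4096 bytes.

```shell
# Illustrative check: 1430 + 4000 extents at 4 MiB each, versus
# 5560320 filesystem blocks at 4096 bytes each.
LV_MIB=$(( (1430 + 4000) * 4 ))
FS_MIB=$(( 5560320 * 4096 / 1024 / 1024 ))
echo "LV: ${LV_MIB} MiB, FS: ${FS_MIB} MiB"   # LV: 21720 MiB, FS: 21720 MiB
```

Both come out to 21720 MiB, the ~21.21 GiB reported by lvextend.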

You can repeat this resize process for any other volumes that need it.

How to upload the security hardened image

To upload the security hardened image, follow the same steps documented in the previous post.

