Security hardened images with volumes

Applies starting with the Queens release

This article is a continuation of http://teknoarticles.blogspot.com.es/2017/07/build-and-use-security-hardened-images.html

How to build the security hardened image with volumes

Starting with Queens, security hardened images can be built using volumes. This gives more flexibility when resizing the different filesystems.

The process of building the security hardened image is the same as in the previous blog post, but there has been a change in how the partitions, volumes and filesystems are defined. There is now a pre-defined partition of 20G, and the volumes are created under it. Volume sizes are defined as percentages of the volume group, not as absolute sizes:
  • /              -> 30% (over 6G)
  • /tmp           -> 5% (over 1G)
  • /var           -> 35% (over 7G)
  • /var/log       -> 25% (over 5G)
  • /var/log/audit -> 4% (over 0.8G)
  • /home          -> 1% (over 0.2G)
With that new volume-based layout, you now have two options to use all the disk space instead of being restricted to those 20G: build an image with the right size, or resize after deployment.

Build an image with the right size

This works in the same way as before: you need to modify the partitioning schema and increase the image size.

1. Modify partitioning schema

To modify the partitioning schema, either to alter the partition sizes or to create/remove partitions, you need to execute:

export DIB_BLOCK_DEVICE_CONFIG='<yaml_schema_with_partitions>'


This needs to be done before executing the openstack overcloud image build command. The YAML currently used to produce the security hardened image is the following, so you can reuse it and update the sizes of the partitions and volumes as needed:


export DIB_BLOCK_DEVICE_CONFIG='''
- local_loop:
    name: image0
- partitioning:
    base: image0
    label: mbr
    partitions:
      - name: root
        flags: [ boot,primary ]
        size: 20G
- lvm:
    name: lvm
    base: [ root ]
    pvs:
      - name: pv
        base: root
        options: [ "--force" ]
    vgs:
      - name: vg
        base: [ "pv" ]
        options: [ "--force" ]
    lvs:
      - name: lv_root
        base: vg
        extents: 30%VG
      - name: lv_tmp
        base: vg
        extents: 5%VG
      - name: lv_var
        base: vg
        extents: 35%VG
      - name: lv_log
        base: vg
        extents: 25%VG
      - name: lv_audit
        base: vg
        extents: 4%VG
      - name: lv_home
        base: vg
        extents: 1%VG
- mkfs:
    name: fs_root
    base: lv_root
    type: xfs
    label: "img-rootfs"
    mount:
      mount_point: /
      fstab:
        options: "rw,relatime"
        fck-passno: 1
- mkfs:
    name: fs_tmp
    base: lv_tmp
    type: xfs
    mount:
      mount_point: /tmp
      fstab:
        options: "rw,nosuid,nodev,noexec,relatime"
- mkfs:
    name: fs_var
    base: lv_var
    type: xfs
    mount:
      mount_point: /var
      fstab:
        options: "rw,relatime"
- mkfs:
    name: fs_log
    base: lv_log
    type: xfs
    mount:
      mount_point: /var/log
      fstab:
        options: "rw,relatime"
- mkfs:
    name: fs_audit
    base: lv_audit
    type: xfs
    mount:
      mount_point: /var/log/audit
      fstab:
        options: "rw,relatime"
- mkfs:
    name: fs_home
    base: lv_home
    type: xfs
    mount:
      mount_point: /home
      fstab:
        options: "rw,nodev,relatime"

'''


For a reference about the YAML schema, please visit https://docs.openstack.org/developer/diskimage-builder/user_guide/building_an_image.html
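
A typo in that schema will only surface late in the build, so it can be useful to sanity-check the exported YAML before launching the build. A minimal check, assuming PyYAML is available on the build host:

# parse the exported schema; it fails loudly if the YAML is malformed
python -c 'import os, yaml; yaml.safe_load(os.environ["DIB_BLOCK_DEVICE_CONFIG"]); print("schema OK")'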

2. Update image size

Once you modify the partitioning and volume schema, you may need to update the size of the generated image, because the total of the partition sizes may exceed the default (20G). To modify the image size, you need to update the config files used to produce the image.
To achieve this, make a copy of /usr/share/openstack-tripleo-common/image-yaml/overcloud-hardened-images.yaml:

cp /usr/share/openstack-tripleo-common/image-yaml/overcloud-hardened-images.yaml /home/stack/overcloud-hardened-images-custom.yaml

Then edit the DIB_IMAGE_SIZE setting in that file, giving it the right value:

...

environment:
  DIB_PYTHON_VERSION: '2'
  DIB_MODPROBE_BLACKLIST: 'usb-storage cramfs freevxfs jffs2 hfs hfsplus squashfs udf vfat bluetooth'
  DIB_BOOTLOADER_DEFAULT_CMDLINE: 'nofb nomodeset vga=normal console=tty0 console=ttyS0,115200 audit=1 nousb'
  DIB_IMAGE_SIZE: '20'
  COMPRESS_IMAGE: '1'
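
For example, if the new partition layout needs 30G, you could update the value in the copy with a one-liner like this (a sketch that assumes the default value of '20' is still present in the copied file):

sed -i "s/DIB_IMAGE_SIZE: '20'/DIB_IMAGE_SIZE: '30'/" /home/stack/overcloud-hardened-images-custom.yaml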


After creating that new file, you can execute the image build command, pointing it to the new file:

openstack overcloud image build --image-name overcloud-hardened-full --config-file /home/stack/overcloud-hardened-images-custom.yaml --config-file /usr/share/openstack-tripleo-common/image-yaml/overcloud-hardened-images-[centos7|rhel7].yaml
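
Once the build finishes, you can verify the virtual size of the generated image before uploading it, for instance (assuming the image is written to the current directory with the name given above):

qemu-img info overcloud-hardened-full.qcow2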

Resize after deployment

With volumes, there is another option for resizing the filesystems: doing it after deployment. Once the image is deployed, you will have a partition with the fixed size, and the logical volumes living on top of it. You can then create a new partition using the remaining disk space:

[root@overcloud-controller-0 heat-admin]# fdisk /dev/vda
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): m
Command action
   a   toggle a bootable flag
   b   edit bsd disklabel
   c   toggle the dos compatibility flag
   d   delete a partition
   g   create a new empty GPT partition table
   G   create an IRIX (SGI) partition table
   l   list known partition types
   m   print this menu
   n   add a new partition
   o   create a new empty DOS partition table
   p   print the partition table
   q   quit without saving changes
   s   create a new empty Sun disklabel
   t   change a partition's system id
   u   change display/entry units
   v   verify the partition table
   w   write table to disk and exit
   x   extra functionality (experts only)

Command (m for help): n
Partition type:
   p   primary (2 primary, 0 extended, 2 free)
   e   extended
Select (default p): p
Partition number (3,4, default 3): 3
First sector (39064548-104857599, default 39065600):
Using default value 39065600
Last sector, +sectors or +size{K,M,G} (39065600-104726527, default 104726527):
Using default value 104726527
Partition 3 of type Linux and of size 31.3 GiB is set

Command (m for help): w
The partition table has been altered!
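
If you prefer not to go through the interactive fdisk session, the same partition can be created non-interactively. A sketch using parted, assuming the disk is /dev/vda and that the new partition should start right after the existing 20G root partition (adjust the start offset so it does not overlap your current layout):

parted --script /dev/vda mkpart primary 20GiB 100%
partprobe /dev/vda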


With that partition, you can create a physical volume, and grow the existing volume group where the initial volumes were created:

[root@overcloud-controller-0 heat-admin]# pvcreate /dev/vda3
  Physical volume "/dev/vda3" successfully created.
[root@overcloud-controller-0 heat-admin]# vgs
  VG #PV #LV #SN Attr   VSize  VFree
  vg   1   6   0 wz--n- 18.62g 12.00m
[root@overcloud-controller-0 heat-admin]# vgextend /dev/vg /dev/vda3
  Volume group "vg" successfully extended
[root@overcloud-controller-0 heat-admin]# vgdisplay
  --- Volume group ---
  VG Name               vg
  System ID            
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  8
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                6
  Open LV               6
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               49.93 GiB
  PE Size               4.00 MiB
  Total PE              12783
  Alloc PE / Size       4765 / 18.61 GiB
  Free  PE / Size       8018 / 31.32 GiB
  VG UUID               54wXr9-pjcT-cyBT-FZ9r-B8u2-2AlU-JhxA4Y

Now with more space on the volume group, logical volumes can be extended as well:

[root@overcloud-controller-0 heat-admin]# lvdisplay /dev/vg/lv_root
  --- Logical volume ---
  LV Path                /dev/vg/lv_root
  LV Name                lv_root
  VG Name                vg
  LV UUID                ihoFT4-Q3XO-Nu5M-BAqt-oqDF-WEYW-xT9NxL
  LV Write Access        read/write
  LV Creation host, time test-dib.rdocloud, 2017-10-20 11:10:02 -0400
  LV Status              available
  # open                 1
  LV Size                <5.59 GiB
  Current LE             1430
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0
  
[root@overcloud-controller-0 heat-admin]# lvextend -l +4000 /dev/vg/lv_root
  Size of logical volume vg/lv_root changed from <5.59 GiB (1430 extents) to 21.21 GiB (5430 extents).
  Logical volume vg/lv_root successfully resized.
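
If you want the volume to absorb all of the newly added space instead of a fixed number of extents, lvextend can also take the remaining free space of the volume group directly:

lvextend -l +100%FREE /dev/vg/lv_root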


Then the final step will be to resize the filesystem:

[root@overcloud-controller-0 heat-admin]# xfs_growfs /
meta-data=/dev/mapper/vg-lv_root isize=512    agcount=4, agsize=366080 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=1464320, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 1464320 to 5560320


You can repeat that resize process for any other volume that needs more space.
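
If several filesystems need to grow, the extend-and-grow steps can be scripted. A minimal sketch, assuming the logical volume names and mount points from the layout above and that the volume group still has enough free extents:

# grow /var and /var/log by 1000 extents (~4G with the default 4MiB PE size) each
for entry in lv_var:/var lv_log:/var/log; do
    lv=${entry%%:*}
    mnt=${entry#*:}
    lvextend -l +1000 /dev/vg/${lv}
    xfs_growfs ${mnt}
done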

How to upload the security hardened image

To upload the security hardened image, follow the same steps documented in the previous post.
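
In short, the upload is typically a single command; a sketch assuming the generated image files live under /home/stack/images and that the image keeps the name used above:

openstack overcloud image upload --image-path /home/stack/images --os-image-name overcloud-hardened-full.qcow2 --whole-disk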
