
How to encrypt your home with guestfs

Continued from http://teknoarticles.blogspot.com.es/2016/12/start-using-whole-disk-images-with.html

For security reasons, you may need to encrypt some partitions or volumes on your images. Instead of having to set that up manually after boot, you can ship a pre-created image with the encryption already in place. This can be done with guestfs and LUKS.

The following script shows how to perform that encryption and have the volume mounted automatically:

#!/usr/bin/env python
import binascii
import guestfs
import os

# remove any previously generated image
try:
    os.unlink("/tmp/overcloud-full-partitioned.qcow2")
except OSError:
    pass

g = guestfs.GuestFS(python_return_dict=True)

# import old and new images
print("Creating new repartitioned image")
g.add_drive_opts("/tmp/overcloud-full.qcow2", format="qcow2", readonly=1)
g.disk_create("/tmp/overcloud-full-partitioned.qcow2", "qcow2", 10 * 1024 * 1024 * 1024) #10G
g.add_drive_opts("/tmp/overcloud-full-partitioned.qcow2", format="qcow2", readonly=0)
g.launch()

# create the partitions for new image
print("Creating the initial partitions")
g.part_init("/dev/sdb", "mbr")
g.part_add("/dev/sdb", "primary", 2048, 616448)
g.part_add("/dev/sdb", "primary", 616449, -1)

g.pvcreate("/dev/sdb2")
g.vgcreate("vg", ['/dev/sdb2', ])
g.lvcreate("var", "vg", 4400)
g.lvcreate("tmp", "vg", 500)
g.lvcreate("swap", "vg", 250)
g.lvcreate("home", "vg", 100)
g.lvcreate("root", "vg", 4000)
g.part_set_bootable("/dev/sdb", 1, True)

# encrypt home partition and write keys
print("Encrypting volume")
random_content = binascii.b2a_hex(os.urandom(1024))
g.luks_format('/dev/vg/home', random_content, 0)

# open the encrypted volume
g.luks_open('/dev/vg/home', random_content, 'cryptedhome')
g.vgscan()
g.vg_activate_all(True)

# volumes that will receive a filesystem; index 2 is swap (handled separately below)
# and home is referenced through its decrypted mapping
volumes = ['/dev/vg/var', '/dev/vg/tmp', '/dev/vg/swap', '/dev/mapper/cryptedhome', '/dev/vg/root']

# add filesystems to volumes
print("Adding filesystems")
ids = {}
keys = [ 'var', 'tmp', 'swap', 'home', 'root' ]
swap_volume = volumes[2]

count = 0
for volume in volumes:
    if count != 2:  # skip swap, it gets mkswap below
        g.mkfs('ext4', volume)
        if keys[count] == 'home':
            # crypttab needs the UUID of the LUKS container, not of the
            # filesystem inside it
            ids['home'] = g.vfs_uuid('/dev/vg/home')
        else:
            ids[keys[count]] = g.vfs_uuid(volume)
    count += 1

# create filesystem on boot and swap
g.mkfs('ext4', '/dev/sdb1')
g.mkswap_opts(volumes[2])
ids['swap'] = g.vfs_uuid(volumes[2])

# mount drives and copy content
print("Start copying content")
g.mkmountpoint('/old')
g.mkmountpoint('/root')
g.mkmountpoint('/boot')
g.mkmountpoint('/home')
g.mkmountpoint('/var')
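# the original overcloud-full.qcow2 was added first, so it is exposed as /dev/sda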
g.mount('/dev/sda', '/old')

g.mount('/dev/sdb1', '/boot')
g.mount(volumes[4], '/root')
g.mount(volumes[3], '/home')
g.mount(volumes[0], '/var')

# copy content to root
results = g.ls('/old/')
for result in results:
    if result not in ('boot', 'home', 'tmp', 'var'):
        print("Copying %s to root" % result)
        g.cp_a('/old/%s' % result, '/root/')

# copy extra content
folders_to_copy = ['boot', 'home', 'var']
for folder in folders_to_copy:
    results = g.ls('/old/%s/' % folder)
    for result in results:
        print("Copying %s to %s" % (result, folder))
        g.cp_a('/old/%s/%s' % (folder, result),
               '/%s/' % folder)

# write keyfile for encrypted volume
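# '/root' here is the mountpoint of the root LV, so the file becomes /root/home_keyfile in the booted system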
g.write('/root/root/home_keyfile', random_content)
g.chmod(0o400, '/root/root/home_keyfile')

# generate mapper for encrypted home
mapper = """
home UUID={home_id} /root/home_keyfile
""".format(home_id=ids['home'])
g.write('/root/etc/crypttab', mapper)

# create /etc/fstab file
print("Generating fstab content")
fstab_content = """
UUID={boot_id} /boot ext4 defaults 1 2
UUID={root_id} / ext4 defaults 1 1
UUID={swap_id} none swap sw 0 0
UUID={tmp_id} /tmp ext4 defaults 1 2
UUID={var_id} /var ext4 defaults 1 2
/dev/mapper/home /home ext4 defaults 1 2
""".format(
    boot_id=g.vfs_uuid('/dev/sdb1'),
    root_id=ids['root'],
    swap_id=ids['swap'],
    tmp_id=ids['tmp'],
    var_id=ids['var'])

g.write('/root/etc/fstab', fstab_content)

# umount filesystems
g.umount('/root')
g.umount('/boot')
g.umount('/old')
g.umount('/var')
g.umount('/home')

# close encrypted volume
g.luks_close('/dev/mapper/cryptedhome')

# mount in the right directories to install bootloader
print("Installing bootloader")
g.mount(volumes[4], '/')
g.mkdir('/boot')
g.mkdir('/var')
g.mount('/dev/sdb1', '/boot')
g.mount(volumes[0], '/var')

# add rd.auto=1 to the GRUB kernel parameters so dracut auto-assembles
# LVM and LUKS devices at boot
g.sh('sed  -i "s/.*GRUB_CMDLINE_LINUX.*/GRUB_CMDLINE_LINUX=\\"console=tty0 crashkernel=auto rd.auto=1\\"/" /etc/default/grub')

g.sh('grub2-install --target=i386-pc /dev/sdb')
g.sh('grub2-mkconfig -o /boot/grub2/grub.cfg')

# create dracut.conf file
dracut_content = """
add_dracutmodules+=" lvm crypt "
"""
g.write('/etc/dracut.conf', dracut_content)

# update initramfs to include lvm and crypt
kernels = g.ls('/lib/modules')
for kernel in kernels:
    print("Updating dracut to include modules in kernel %s" % kernel)
    g.sh('dracut -f /boot/initramfs-%s.img %s' % (kernel, kernel))

# do a selinux relabel
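# files written through libguestfs need their SELinux contexts restored before first boot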
g.selinux_relabel('/etc/selinux/targeted/contexts/files/file_contexts', '/', force=True)
g.selinux_relabel('/etc/selinux/targeted/contexts/files/file_contexts', '/var', force=True)

g.umount('/boot')
g.umount('/var')
g.umount('/')

# close images
print("Finishing image")
g.shutdown()
g.close()


The script performs the following actions:
  • Create a volume for home
  • Generate a random key that will be used to encrypt the home volume
  • Encrypt the home volume with that key, using luks_format
  • Once the volume is formatted, open it with the previously generated key, using the luks_open command
  • Create the filesystem and add the desired content to the home volume
  • Copy the generated key to a file in /root, so it can be read when booting the system to automatically decrypt the volume. For security, this file needs to be owned by root and have 0400 permissions (readable only by root)
  • Generate an /etc/crypttab file, which is the one used to reference the encrypted volume. This file needs to have the format:
    mapping_name     UUID=<uuid for home volume>     <path_to_key>
  • Reference that mapping in the /etc/fstab file: instead of mounting the device by UUID, use the following syntax
    /dev/mapper/<mapping_name> /home ext4 defaults 1 2
  • Unmount all the volumes and close the encrypted one with luks_close
  • If needed, enable the crypt and lvm modules in your ramdisk, using dracut to regenerate it
  • At this point, the encrypted partition will be automatically decrypted with the desired key at boot time, and will be available without manual intervention.
Please note that for extra security, the most secure place to store the encryption key is a removable device, such as a USB disk. This sample works as a proof of concept; to further harden your system, you should update it to write the key to your removable device and update the mapper to reference that disk instead, as sketched below.
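As a rough illustration of that hardening, the sketch below creates a separate key-disk image with guestfs and writes the keyfile there instead of /root. The image path, its size and the KEYS label are made up for the example, and the final crypttab/initramfs wiring (for instance via dracut's rd.luks.key parameter) would still need to be adapted to your environment:

#!/usr/bin/env python
# Sketch only: store the LUKS key on a small external "USB" key disk image.
import binascii
import guestfs
import os

# this would be the same key that is passed to luks_format in the main script
key = binascii.b2a_hex(os.urandom(1024))

g = guestfs.GuestFS(python_return_dict=True)
g.disk_create("/tmp/home-keydisk.img", "raw", 16 * 1024 * 1024)  # hypothetical 16M key disk
g.add_drive_opts("/tmp/home-keydisk.img", format="raw", readonly=0)
g.launch()

# format the whole device and label it so it is easy to find at boot
g.mkfs("ext4", "/dev/sda")
g.set_label("/dev/sda", "KEYS")

# copy the key onto it, readable only by root
g.mount("/dev/sda", "/")
g.write("/home_keyfile", key)
g.chmod(0o400, "/home_keyfile")

g.umount("/")
g.shutdown()
g.close()

# In the main image, /etc/crypttab would then point at that external keyfile,
# and the initramfs can be told where to find it, for example with dracut's
# rd.luks.key=/home_keyfile:LABEL=KEYS kernel parameter.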
