
Posts

Configure and measure realtime worker performance on OpenShift

Depending on your workloads, you may want to have workers with a realtime kernel in your cluster.
How to configure the nodes and enroll them into an OpenShift cluster is not within the scope of this article; we will assume that you already have a worker node up and running with a realtime kernel.
Once you have it, how do you properly schedule workloads that make use of it?
CPU manager and kubelet static policy
See https://docs.openshift.com/container-platform/4.2/scalability_and_performance/using-cpu-manager.html. CPU Manager manages groups of CPUs and constrains workloads to specific CPUs. This is useful in several cases; in ours, for low-latency applications. To enable CPU Manager, the following steps are needed:
Label a node for CPU Manager:
# oc label node perf-node.example.com cpumanager=true
Edit the machineconfigpool worker, and add a label to reference a custom kubelet:
# oc edit machineconfigpool worker
metadata:
  creationTimestamp: 2019-xx-xxx
  generation: 3
  labels:
    custom-kubelet…
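Once CPU Manager is enabled with the static policy, it pins pods in the Guaranteed QoS class that request whole CPUs. The sketch below shows what such a pod could look like; the pod name, image and resource values are illustrative assumptions, not taken from the original post:
# Hypothetical pod: requests equal limits with an integer CPU count,
# which puts it in the Guaranteed QoS class so CPU Manager pins it
# to dedicated cores on the labeled node.
cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: cpumanager-test
spec:
  nodeSelector:
    cpumanager: "true"
  containers:
  - name: app
    image: registry.example.com/low-latency-app:latest
    resources:
      requests:
        cpu: "2"
        memory: "1Gi"
      limits:
        cpu: "2"
        memory: "1Gi"
EOF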
Recent posts

Add RHEL8 nodes to OpenShift deployments

This blogpost is going to show how to automatically enroll RHEL 8 (realtime) nodes into OpenShift 4.1 deployments, using the UPI method.
This assumes that you will set up an OpenShift cluster using UPI, following the corresponding documentation: https://docs.openshift.com/container-platform/4.1/installing/installing_bare_metal
The procedure for spinning up this cluster in a semi-automated way is also shown at https://github.com/redhat-nfvpe/upi-rt . This article will assume that you have this UPI cluster up and running.

Enroll RHEL 8 nodes as workers
By default, all nodes added to an OpenShift cluster are based on RHCOS, but there are use cases where you may need RHEL nodes. This is the case for RT (realtime) nodes, where you need a specific kernel.
This can be achieved with the help of kickstart, and some specific configuration of PXE kernel args (in this case achieved with matchbox).
We are going to use RHEL 8 images, booted with PXE, but we are going to add some specific configuration to allow t…
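To illustrate the kind of PXE kernel args involved, here is a sketch of an append line pointing the RHEL 8 installer at a kickstart file; the matchbox host and file names are assumptions, not taken from the upi-rt repo:
# Hypothetical PXE append line: inst.repo points at the RHEL 8
# installation tree, inst.ks at the kickstart file served by matchbox.
append ip=dhcp inst.repo=http://matchbox.example.com/assets/rhel8 inst.ks=http://matchbox.example.com/assets/rhel8-worker.ks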

Create and restore external backups of virtual machines with libvirt

A common need for production deployments is the ability to take backups of your working virtual machines and export them to some external storage.
Although libvirt offers the possibility of taking snapshots and restoring them, those snapshots are intended to be managed locally, and are lost when you destroy your virtual machines.
You may need to trash your whole environment and re-create the virtual machines from an external backup, so this article offers a procedure to achieve that.
First step, create an external snapshot
The first step is taking a snapshot of your running vm. The best way to take an isolated backup is the virsh blockcopy command. So, how to proceed?

1. First you need to extract all the disks that your vm has. This can be achieved with the domblklist command:
# List the vm's disks and grab the target device name (third column)
DISK_NAME=$(virsh domblklist {{domain}} --details | grep 'disk' | awk '{print $3}')

This will extract the name of the device that the vm is using (vda, hda, et…
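To show where this is heading, a minimal sketch of the copy itself follows; note that blockcopy has traditionally required a transient domain, and the backup destination path is an assumption:
# Keep the XML so the domain can be re-defined after the copy
virsh dumpxml {{domain}} > /tmp/{{domain}}.xml
virsh undefine {{domain}}
# Mirror the disk to external storage; --finish leaves the copy
# consistent while the vm keeps running on its original disk
virsh blockcopy {{domain}} $DISK_NAME /backup/{{domain}}-disk.qcow2 --wait --verbose --finish
# Make the domain persistent again
virsh define /tmp/{{domain}}.xml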

Setup an NFS client provisioner in Kubernetes

One of the most common needs when deploying Kubernetes is the ability to use shared storage. While there are several options available, one of the most common and easiest to set up is an NFS server.
This post will explain how to set up a dynamic NFS client provisioner on Kubernetes, relying on an existing NFS server on your systems.
Step 1. Set up an NFS server (sample for CentOS)
The first thing you will need, of course, is an NFS server. This can be achieved with a few easy steps:

Install the nfs package:
yum install -y nfs-utils
Enable and start the nfs service and rpcbind:
systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server
Create the directory that will be shared by NFS, and change the permissions:
mkdir /var/nfsshare
chmod -R 755 /var/nfsshare
chown nfsnobody:nfsnobody /var/nfsshare
Share the NFS directory over the network by creating the /etc/exports file:
vi /etc/exports
/var/nfsshare …
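The export line is truncated above; for reference, a typical entry could look like the following, assuming a hypothetical 192.168.0.0/24 client subnet (adjust the network and options to your environment):
# /etc/exports: export /var/nfsshare to the client subnet (hypothetical network)
/var/nfsshare 192.168.0.0/24(rw,sync,no_root_squash)
# Reload the export table
exportfs -ra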

How to deploy TripleO Queens without external network

TripleO Queens has an interesting feature called 'composable networks'. It allows you to deploy OpenStack with the choice of networks that you want, depending on your environment. Please see: https://docs.openstack.org/tripleo-docs/latest/install/advanced_deployment/custom_networks.html

By default, the following networks are defined:
Storage
Storage Management
Internal Api
Tenant
Management
External
The external network allows the endpoints to be reached externally, and also lets you define networks to reach the vms externally as well. But to have that, you need a routable network with external access in your lab. Not all labs have one, especially CI environments, so it may be useful to deploy without it and just have internal access to endpoints and vms. In this blogpost I'm going to explain how to achieve that.

First, make a copy of your original tripleo-heat-templates into another directory, /home/stack/working-templates, and edit the following files:
network_data.…
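The file list is cut off above, but to illustrate the kind of edit involved: each network entry in network_data.yaml carries an enabled flag, so disabling the External network could look like the sketch below (the exact fields vary per release, so treat this as an assumption):
# network_data.yaml: disable the External network (sketch)
- name: External
  enabled: false
  vip: false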

Automated TripleO upgrades

Upgrading TripleO can be a hard task. While there are instructions on how to do it manually, having a set of playbooks that automate this task can help.
With this purpose, I've created the TripleO upgrade automation playbooks (https://github.com/redhat-nfvpe/tripleo-upgrade-automation).
These are a set of playbooks that let you upgrade an existing TripleO deployment, especially focused on versions 8 to 10, and integrated with local mirrors (https://github.com/redhat-nfvpe/rhel-local-mirrors).

If you want to know more, please visit the tripleo-upgrade-automation project on GitHub, where you'll find instructions on how to properly use this repo to automate your upgrades.

Security hardened images with volumes

Applies starting with Queens. This article is a continuation of http://teknoarticles.blogspot.com.es/2017/07/build-and-use-security-hardened-images.html
How to build the security hardened image with volumes
Starting with Queens, security hardened images can be built using volumes. This has the advantage of more flexibility when resizing the different filesystems.

The process of building the security hardened image is the same as in the previous blogpost, but there has been a change in how the partitions, volumes and filesystems are defined. Now there is a pre-defined partition of 20G, and volumes are created under it. Volume sizes are defined as percentages, not absolute sizes:
/              -> 30% (over 6G)
/tmp           -> 5% (over 1G)
/var           -> 35% (over 7G)
/var/log       -> 25% (over 5G)
/var/log/audit -> 4% (over 0.8G)
/home          -> 1% (over 0.2G)
With that new layout based on volumes, you now have two options for resizing, to use all th…
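As a sketch of what resizing such an LVM-backed filesystem looks like in general (the volume group and logical volume names here are assumptions, not the image's actual layout):
# Grow the hypothetical /var volume by 2G and resize its filesystem (-r)
lvextend -r -L +2G /dev/vg_root/lv_var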