Set up an NFS client provisioner in Kubernetes

One of the most common needs when deploying Kubernetes is the ability to use shared storage. While there are several options available, one of the most common and easiest to set up is an NFS server.
This post will explain how to set up a dynamic NFS client provisioner on Kubernetes, relying on an existing NFS server on your systems.
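As a preview of where this ends up: at the time, one common route was the stable/nfs-client-provisioner Helm chart. The chart name and the nfs.server/nfs.path values below are assumptions based on the upstream external-storage project, not necessarily the exact method this post uses:

# Hypothetical Helm 2 invocation; adjust the server IP and path to your NFS export
helm install --name nfs-client \
  --set nfs.server=<nfs_server_ip> \
  --set nfs.path=/var/nfsshare \
  stable/nfs-client-provisioner

PersistentVolumeClaims that reference the chart's storage class are then provisioned dynamically under the exported path.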
Step 1. Set up an NFS server (sample for CentOS)
The first thing you will need, of course, is an NFS server. This can be achieved with a few easy steps:

Install the nfs package:
yum install -y nfs-utils
Enable and start the nfs service and rpcbind:
systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server
Create the directory that will be shared by NFS, and change the permissions:
mkdir /var/nfsshare
chmod -R 755 /var/nfsshare
chown nfsnobody:nfsnobody /var/nfsshare
Share the NFS directory over the network by creating the /etc/exports file:
vi /etc/exports
/var/nfsshare …
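The export entry above is truncated; as a purely hypothetical example (the subnet and options are assumptions, adjust them to your environment), a line granting read-write access to a lab subnet could look like this, followed by applying and verifying the export:

/var/nfsshare 192.168.0.0/24(rw,sync,no_root_squash)

exportfs -r
exportfs -v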
Recent posts

How to deploy TripleO Queens without external network

TripleO Queens has an interesting feature called 'composable networks'. It allows you to deploy OpenStack with the choice of networks that you want, depending on your environment. Please see: https://docs.openstack.org/tripleo-docs/latest/install/advanced_deployment/custom_networks.html

By default, the following networks are defined:
Storage
Storage Management
Internal Api
Tenant
Management
External

The external network allows the endpoints to be reached externally, and also lets you define networks so the VMs can be reached externally as well. But for that, you need a routable network with external access in your lab. Not all labs have one, especially CI environments, so it may be useful to deploy without it, and just have internal access to endpoints and VMs. In this blogpost I'm going to explain how to achieve that.

First, make a copy of your original tripleo-heat-templates into another directory, /home/stack/working-templates (a sketch of this copy step follows the file list), and edit the following files:
network_data.…
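As a sketch of the copy step mentioned above (the source path is the default template location on the undercloud, an assumption here):

cp -r /usr/share/openstack-tripleo-heat-templates /home/stack/working-templates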

Automated TripleO upgrades

Upgrading TripleO can be a hard task. While there are instructions on how to do it manually, having a set of playbooks that automate this task can help.
With this purpose, I've created the TripleO upgrade automation playbooks (https://github.com/redhat-nfvpe/tripleo-upgrade-automation).
These are a set of playbooks that allow you to upgrade an existing TripleO deployment, especially focused on versions 8 to 10, and integrated with local mirrors (https://github.com/redhat-nfvpe/rhel-local-mirrors).
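To try them, the repository linked above can be cloned directly (see its README for the actual invocation of the playbooks):

git clone https://github.com/redhat-nfvpe/tripleo-upgrade-automation.git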

If you want to know more, please visit the tripleo-upgrade-automation project on GitHub, where you'll find instructions on how to properly use this repo to automate your upgrades.

Security hardened images with volumes

Starting to apply since Queens.
This article is a continuation of http://teknoarticles.blogspot.com.es/2017/07/build-and-use-security-hardened-images.html
How to build the security hardened image with volumes
Starting with Queens, security hardened images can be built using volumes. This has the advantage of more flexibility when resizing the different filesystems.

The process of building the security hardened image is the same as in the previous blogpost. But there has been a change in how the partitions, volumes and filesystems are defined. Now there is a pre-defined partition of 20G, and volumes are created under it. Volume sizes are defined as percentages, not absolute sizes:
/              -> 30% (over 6G)
/tmp           -> 5% (over 1G)
/var           -> 35% (over 7G)
/var/log       -> 25% (over 5G)
/var/log/audit -> 4% (over 0.8G)
/home          -> 1% (over 0.2G)

With that new layout based on volumes, you now have two options for resizing, to use all th…
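Since the layout is volume-based, one hypothetical resize path is growing a single logical volume; the volume group and logical volume names below are assumptions, so check yours first:

# List the actual volume group and logical volume names on the system
lvs
# Grow the log volume by 1G, resizing its filesystem along with it (-r)
lvresize -r -L +1G /dev/vg/lv_log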

Customize OpenStack images for booting from iSCSI

When working with OpenStack Ironic and TripleO, and using the boot from iSCSI feature, you may need to add some kernel parameters to the deployment image for it to work.
When using some specific hardware, the deployment image needs to contain specific kernel parameters on boot. For example, when trying to boot from iSCSI with iBFT NICs, you need to add the following kernel parameters:

rd.iscsi.ibft=1 rd.iscsi.firmware=1 

The TripleO image that is generated by default doesn't contain those parameters, because they are very specific to the hardware you use. It is also not currently possible to pass these parameters through Ironic.

The solution is to customize the deployment image to add these kernel parameters. The overcloud-full.qcow2 image that comes by default with TripleO is a partition image. This means that the bootloader is not pre-installed; that is done through Ironic. So the way to add custom parameters is to modify the /etc/default/gr…
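A minimal sketch of that edit using virt-customize from libguestfs; the sed expression and the assumption that the target file is /etc/default/grub are mine, since the post's exact steps are truncated above:

# Prepend the iBFT parameters to GRUB_CMDLINE_LINUX inside the image
virt-customize -a overcloud-full.qcow2 \
  --run-command 'sed -i "s/^GRUB_CMDLINE_LINUX=\"/GRUB_CMDLINE_LINUX=\"rd.iscsi.ibft=1 rd.iscsi.firmware=1 /" /etc/default/grub'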

Deploying and upgrading TripleO with local mirrors

Continued from http://teknoarticles.blogspot.com.es/2017/08/automating-local-mirrors-creation-in.html

In the previous blogpost, I explained how to automate RHEL mirror creation using https://github.com/redhat-nfvpe/rhel-local-mirrors. Now we are going to learn how to deploy and upgrade TripleO using those mirrors.
Deploying TripleO
Undercloud
To use local mirrors in the undercloud, you simply need to take the osp<version>.repo file generated with the rhel-local-mirrors playbook and copy it to /etc/yum.repos.d/ on the undercloud host:
sudo curl http://<local_mirror_ip>/osp<version>_repo/osp<version>.repo \
  -o /etc/yum.repos.d/osp.repo

Then proceed with the standard deployment instructions.
Overcloud
Each node in the overcloud (controllers, computes, etc.) needs to have a copy of the repository file from the server hosting the local mirrors. To achieve this, you can include a script that downloads the osp<version>.repo file when deployi…
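As a hypothetical sketch of such a script (how it is wired into the deployment is truncated above), it essentially repeats the undercloud download step on each node:

#!/bin/bash
# Hypothetical helper; <local_mirror_ip> and <version> are placeholders
curl http://<local_mirror_ip>/osp<version>_repo/osp<version>.repo \
  -o /etc/yum.repos.d/osp.repo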

Automating local mirror creation in RHEL

Sometimes there is a need to consume RHEL mirrors locally, rather than using the Red Hat content delivery network. This may be needed to speed up deployments, or due to network constraints.

I created an Ansible playbook, rhel-local-mirrors (https://github.com/redhat-nfvpe/rhel-local-mirrors), that can help with that.
What does rhel-local-mirrors do?
It is basically a tool that connects to the Red Hat CDN and syncs the repositories locally, populating the desired mirrors, which can then be accessed by other systems via HTTP.

The playbook performs several tasks, which can be run together or independently:
register a system on the Red Hat Network
prepare the system to host mirrors
create the specified mirrors
schedule automatic updates of the mirrors

How to use it?
It is an Ansible playbook, so start by installing Ansible, in any preferred format. Then continue by cloning the playbook:
git clone https://github.com/redhat-nfvpe/rhel-local-mirrors.git

This playbook expects a group of servers called