
Deploying and upgrading TripleO with local mirrors

Continued from http://teknoarticles.blogspot.com.es/2017/08/automating-local-mirrors-creation-in.html

In the previous blog post, I explained how to automate RHEL mirror creation using https://github.com/redhat-nfvpe/rhel-local-mirrors. Now we are going to learn how to deploy and upgrade TripleO using those mirrors.

Deploying TripleO

Undercloud

To use local mirrors in the undercloud, you simply need to take the osp<version>.repo file generated by the rhel-local-mirrors playbook and copy it to /etc/yum.repos.d/ on the undercloud host:
sudo curl http://<local_mirror_ip>/osp<version>_repo/osp<version>.repo \
    -o /etc/yum.repos.d/osp.repo
Then proceed with the standard deployment instructions.
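Before launching the deployment, it can be worth verifying that the local mirror is actually the repository being used. A minimal check (the exact repo ids depend on what your generated .repo file defines):
sudo yum clean all
sudo yum repolist enabled
grep baseurl /etc/yum.repos.d/osp.repo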

Overcloud

Each overcloud node (controllers, computes, etc.) needs a copy of the repository file from the server where we host the local mirrors. You can achieve this by including a script that downloads the osp<version>.repo file when deploying.
This can be done with a Heat template:
heat_template_version: 2014-10-16
description: >
  File for downloading the repository file from the asset server
resources:
  userdata:
    type: OS::Heat::MultipartMime
    properties:
      parts:
      - config: {get_resource: assetrepo_config}
  assetrepo_config:
    type: OS::Heat::SoftwareConfig
    properties:
      config: |
        #!/bin/bash
        sudo curl http://<local_mirror_ip>/osp<version>_repo/osp<version>.repo -o /etc/yum.repos.d/osp<version>.repo
outputs:
  OS::stack_id:
    value: {get_resource: userdata}

Then create the environment file that will reference it:
resource_registry:
  OS::TripleO::NodeUserData: /home/stack/templates/asset-repo.yaml

Then you need to include that environment file in your deploy command:
openstack overcloud deploy --templates -e ~/templates/asset-environment.yaml \
    [OTHER OPTIONS]
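Once the deployment finishes, you can confirm that each node picked up the repository file. A quick check from the undercloud, assuming the default heat-admin user (the node IP below is illustrative):
openstack server list
ssh heat-admin@<node_ip> cat /etc/yum.repos.d/osp<version>.repo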

Upgrading TripleO

Undercloud

To upgrade the undercloud, you need to make sure that the repositories from the previous version are disabled, and the ones for the new version are enabled.
If using local mirrors, this means removing the older repo file:
sudo rm /etc/yum.repos.d/osp.repo
And downloading the new one:
sudo curl http://<local_mirror_ip>/osp<version+1>_repo/osp<version+1>.repo \
    -o /etc/yum.repos.d/osp.repo

After that, execute the yum update and openstack undercloud upgrade commands as usual.
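For reference, a minimal sketch of that usual sequence (the package updated before the upgrade can vary between releases, so treat this as illustrative):
sudo yum update -y python-tripleoclient
openstack undercloud upgrade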

Overcloud

To upgrade the overcloud, you need to disable the older repositories and enable the new ones at upgrade time. This is achieved with a parameter called UpgradeInitCommand, which contains a custom bash snippet that disables the older repos and enables the new ones, depending on your needs.
A sample environment using UpgradeInitCommand can be:
cat > overcloud-repos.yaml <<EOF
parameter_defaults:
  UpgradeInitCommand: |
    set -e
    # REPOSITORY SWITCH COMMANDS GO HERE
    sudo rm /etc/yum.repos.d/osp.repo
    sudo curl http://<local_mirror_ip>/osp<version+1>_repo/osp<version+1>.repo -o /etc/yum.repos.d/osp.repo
EOF

And then execute your upgrade command including that overcloud-repos.yaml file:
openstack overcloud deploy --templates \
    -e <full environment> \
    -e /usr/share/openstack-tripleo-heat-templates/environments/major-upgrade-composable-steps.yaml \
    -e overcloud-repos.yaml
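Note that major upgrades typically finish with a converge step that re-applies the final configuration. A sketch assuming the Pike-era environment file name (check the exact file shipped with your release):
openstack overcloud deploy --templates \
    -e <full environment> \
    -e /usr/share/openstack-tripleo-heat-templates/environments/major-upgrade-converge.yaml \
    -e overcloud-repos.yaml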
 
