
Deploying and upgrading TripleO with local mirrors

Continued from http://teknoarticles.blogspot.com.es/2017/08/automating-local-mirrors-creation-in.html

In the previous blog post, I explained how to automate the RHEL mirror creation using https://github.com/redhat-nfvpe/rhel-local-mirrors. Now we are going to learn how to deploy and upgrade TripleO using them.

Deploying TripleO

Undercloud

To use local mirrors in the undercloud, you simply need to take the osp<version>.repo file generated by the rhel-local-mirrors playbook and copy it to /etc/yum.repos.d/ on the undercloud host:
sudo curl http://<local_mirror_ip>/osp<version>_repo/osp<version>.repo \
-o /etc/yum.repos.d/osp.repo
Then proceed with the standard deployment instructions.
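Before deploying, a quick sanity check that yum now resolves against the mirror can save time; a minimal sketch (the exact repo ids depend on what rhel-local-mirrors generated for your version):
# The local mirror entries from osp.repo should show up here
yum repolist enabled

# Then run the standard undercloud install
openstack undercloud install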

Overcloud

Each node of the overcloud (controllers, computes, etc.) needs a copy of the repository file from the server hosting the local mirrors. To achieve this, you can include a script that downloads the osp<version>.repo file at deploy time.
This can be achieved with a Heat template (saved, for example, as /home/stack/templates/asset-repo.yaml):
heat_template_version: 2014-10-16
description: >
  Template for downloading the repository file from the asset server
resources:
  # Cloud-init user-data that is injected into every overcloud node
  userdata:
    type: OS::Heat::MultipartMime
    properties:
      parts:
      - config: {get_resource: assetrepo_config}
  # First-boot script that fetches the repo file from the local mirror
  assetrepo_config:
    type: OS::Heat::SoftwareConfig
    properties:
      config: |
        #!/bin/bash
        sudo curl http://<local_mirror_ip>/osp<version>_repo/osp<version>.repo \
          -o /etc/yum.repos.d/osp<version>.repo
outputs:
  OS::stack_id:
    value: {get_resource: userdata}

Then create the environment file (e.g. ~/templates/asset-environment.yaml) that references it:
resource_registry:
  OS::TripleO::NodeUserData: /home/stack/templates/asset-repo.yaml

Then you need to include that environment file in your deploy command:
openstack overcloud deploy --templates -e ~/templates/asset-environment.yaml \
    [OTHER OPTIONS]
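Once the deploy finishes, you can verify that the repo file actually landed on each node. A rough check, assuming the default heat-admin user and ctlplane addressing that TripleO normally uses:
# Check the downloaded repo file on every overcloud node
for ip in $(openstack server list -f value -c Networks | sed 's/ctlplane=//'); do
    ssh heat-admin@$ip "ls /etc/yum.repos.d/osp*.repo"
done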

Upgrading TripleO

Undercloud

To upgrade the undercloud, you need to make sure that the repositories from the previous version are disabled and the ones from the new version are enabled.
When using local mirrors, this means removing the older repo file:
sudo rm /etc/yum.repos.d/osp.repo
And creating the new one:
sudo curl http://<local_mirror_ip>/osp<version+1>_repo/osp<version+1>.repo -o /etc/yum.repos.d/osp.repo

After that, execute the yum update and openstack undercloud upgrade commands as usual.
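Putting it together, the full undercloud upgrade sequence looks roughly like this (a sketch; <local_mirror_ip> and <version+1> remain placeholders):
# Point yum at the new version's local mirror
sudo rm /etc/yum.repos.d/osp.repo
sudo curl http://<local_mirror_ip>/osp<version+1>_repo/osp<version+1>.repo \
    -o /etc/yum.repos.d/osp.repo

# Update packages, then upgrade the undercloud
sudo yum update -y
openstack undercloud upgrade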

Overcloud

To upgrade the overcloud, you need to disable the older repositories and enable the new ones at upgrade time. This is done with the UpgradeInitCommand parameter, which takes a custom bash script that disables the old repos and enables the new ones, according to your needs.
A sample environment file using UpgradeInitCommand could be:
cat > overcloud-repos.yaml <<EOF
parameter_defaults:
  UpgradeInitCommand: |
    set -e
    # REPOSITORY SWITCH COMMANDS GO HERE
    sudo rm /etc/yum.repos.d/osp.repo
    sudo curl http://<local_mirror_ip>/osp<version+1>_repo/osp<version+1>.repo -o /etc/yum.repos.d/osp.repo
EOF

Then execute your upgrade command, including that overcloud-repos.yaml file:
openstack overcloud deploy --templates \
    -e <full environment> \
    -e /usr/share/openstack-tripleo-heat-templates/environments/major-upgrade-composable-steps.yaml \
    -e overcloud-repos.yaml
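After the upgrade converges, the same kind of loop as in the deploy check can confirm that every node now points at the <version+1> mirror (again assuming the default heat-admin user):
# Each node's repo file should now reference the <version+1> URLs
for ip in $(openstack server list -f value -c Networks | sed 's/ctlplane=//'); do
    ssh heat-admin@$ip "grep baseurl /etc/yum.repos.d/osp*.repo"
done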
 
