Automated OSP deployments with Tripleo Quickstart

In this article I'm going to show a method for automating OSP (Red Hat OpenStack Platform) deployments. These automated deployments can be very useful for CI, or simply to experiment with and test the system.

Components involved

  • TOAD: a set of playbooks to deploy Jenkins, jenkins-job-builder and an optional ELK stack. This will install a ready-to-use system with all the jobs preconfigured (including OSP 10 deployments and image building).
  • TOAD jenkins-jobs: a set of job templates and macros, using jenkins-job-builder syntax, that are converted into Jenkins jobs for building the OSP base images and for deploying the system.
  • TOAD job-configs: a set of job configurations to be used along with the jenkins-jobs repo. It provides a set of basic configs to build OpenStack using RDO or OSP.
  • TripleO quickstart: a set of Ansible playbooks used for building images and for RDO/OSP deployments.

System requirements

  • One VM to run the Jenkins master + nginx provided by TOAD. If you also want to deploy the optional ELK stack, you will need extra VMs for Elasticsearch, Logstash and Kibana.
  • For virtualized OSP deployments: one multi-core baremetal server with at least 16GB of RAM and 60GB of disk. This will act as the Jenkins slave and will host the virtualized undercloud, controller and compute nodes.
  • For baremetal OSP deployments: the same server as in the virtualized case, to host the Jenkins slave + undercloud, plus two additional baremetal servers with at least 8GB of RAM each, to act as controller and compute nodes.
Please note that the servers acting as slaves will need a valid Red Hat subscription.

Deployment steps

1. Setup the system

If you rely on TOAD, the initial steps are completely automated. Follow the TOAD documentation for the initial deployment: it will create a Jenkins/nginx VM for you, enroll one slave and configure it properly, and populate all the Jenkins jobs, leaving the system ready to use.
Please note that before running the playbook, you need to create the ~/.ansible/vars/toad_vars.yml file. The following settings are needed for this use case:
  • rhn_subscription_username: <<your RHN username>>
  • rhn_subscription_password: <<your RHN password>>
  • rhn_subscription_pool_id: <<your RHN pool id>> 
  • slave_mirror_sync: true
This will register the Jenkins slave with the Red Hat Network and create a local mirror of all the needed Red Hat repositories, which you can reuse later for your image builds and deployments.
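
For reference, a minimal ~/.ansible/vars/toad_vars.yml for this use case could look like the following sketch (the <<...>> values are placeholders, and any other variables your TOAD setup requires still apply):

rhn_subscription_username: <<your RHN username>>
rhn_subscription_password: <<your RHN password>>
rhn_subscription_pool_id: <<your RHN pool id>>
slave_mirror_sync: true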

After the deployment has finished, you can access the Jenkins master on the configured IP/hostname, where you will see all the available jobs.

2. Get base images

For OSP 10 deployments, we need to rely on the RHEL 7.3 guest image. However, the packages do not yet provide this version, so we'll need to download it manually.
To do so, open a browser and log in to access.redhat.com with your credentials. Then go to https://access.redhat.com/downloads/content/69/ver=/rhel---7/7.3/x86_64/product-software and copy the KVM guest image link to your clipboard.
SSH into your Jenkins slave and execute the following commands:
mkdir /opt/rhel_guest_images
wget -O /opt/rhel_guest_images/rhel-guest-image-7.3.qcow2 <<link_for_7.3_guest>> 

This will make the base image available for the OSP image builds. This process only needs to be done once for each slave you register.
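
If you want to verify the download (assuming qemu-img is installed on the slave, which is typically the case on a virtualization host), you can inspect the image:

qemu-img info /opt/rhel_guest_images/rhel-guest-image-7.3.qcow2

This should report a qcow2 file of the expected size.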

3. Execute image build job

In order to deploy OSP successfully, we first need to generate the undercloud and overcloud base images. This can be done by executing a pre-configured job that takes the previously downloaded 7.3 guest image and composes the desired images, to be reused later by the OSP deployments.
Please note that this job needs to be executed only once per enrolled slave.
Running the job is as easy as logging into Jenkins and clicking build on the job you need. In our case, you will need to execute the oooq-newton-osp10-build-cira job.


This job will use TripleO quickstart, along with some playbooks that extend it, to compose the undercloud.qcow2 base image. The overcloud and ironic-python-agent images are embedded inside the undercloud image, under the /home/stack directory.
When the job finishes, it moves the final images under the /home/stack/images/osp<<version>> directory, ready to be reused by the deployment job.
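
As a quick sanity check on the slave (assuming an OSP 10 build, so the osp10 directory), you can list the generated images; you should see at least undercloud.qcow2 there:

ls -lh /home/stack/images/osp10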

3. Execute deployment job

Once the base images are ready, it's time to launch OSP deployments. Running one is as simple as executing the right job; in this case, for an OSP 10 deployment, you can run the oooq-osp10-osp-deploy-cira job.
This will use TripleO quickstart for the deployment and will rely on the previously generated base images, as well as on the local repo that was created, to launch a successful OpenStack deployment.
After it finishes, the undercloud and overcloud nodes will be available.
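
As a side note, jobs can also be triggered from the command line through the Jenkins REST API instead of the web UI. A minimal sketch, assuming you have a Jenkins user and API token configured (use the buildWithParameters endpoint instead if the job is parameterized):

curl -X POST -u <<jenkins_user>>:<<api_token>> http://<<jenkins_configured_hostname>>/job/oooq-osp10-osp-deploy-cira/build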

4. Accessing the undercloud and overcloud

You can then SSH into your slave and switch to the stack user. The undercloud can be accessed with the command:
ssh -F $HOME/quickstart/ssh.config.ansible undercloud
Once on the undercloud, you can source the credentials file:
source $HOME/stackrc
Then you can execute OpenStack commands. You can get the overcloud nodes by executing the nova list command, which returns the list of nodes involved, with their IPs:

nova list
+--------------------------------------+-------------------------+--------+------------+-------------+------------------------+
| ID                                   | Name                    | Status | Task State | Power State | Networks               |
+--------------------------------------+-------------------------+--------+------------+-------------+------------------------+
| 3d4a79d1-53ea-4f32-b496-fbdcbbb6a5a3 | overcloud-controller-0  | ACTIVE | -          | Running     | ctlplane=192.168.24.16 |
| 4f8acb6d-6394-4193-a6c6-50d8731fad7d | overcloud-novacompute-0 | ACTIVE | -          | Running     | ctlplane=192.168.24.8  |
+--------------------------------------+-------------------------+--------+------------+-------------+------------------------+

Then you can access the nodes using the heat-admin user:

ssh heat-admin@192.168.24.16

5. Accessing the logs

Please note that you can access all the job logs on the same Jenkins VM. To get them, browse to http://<<jenkins_configured_hostname>>/logs. That will show a list of jobs and, for each one, folders storing the complete set of logs for each build.
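
Since the logs are exposed over plain HTTP by nginx, you can also pull them from the command line; a sketch with wget (the job name is just the example used above):

wget -r -np -nH http://<<jenkins_configured_hostname>>/logs/oooq-osp10-osp-deploy-cira/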


So that's it! After following these steps you will have a working OSP deployment on your systems that you can use for experimentation, CI, or any other purpose. Enjoy!

