Automated OSP deployments with Tripleo Quickstart

In this article I'm going to show a method for automating OSP (Red Hat OpenStack Platform) deployments. These automated deployments can be very useful for CI, or simply to experiment with and test the system.

Components involved

  • TOAD: a set of playbooks to deploy Jenkins, jenkins-job-builder and an optional ELK stack. This will install a ready-to-use system with all the jobs preconfigured (including OSP10 deployments and image building).
  • TOAD jenkins-jobs: A set of job templates and macros, using jenkins-job-builder syntax, that get converted into Jenkins jobs for building the OSP base images and for deploying the system.
  • TOAD job-configs: A set of job configurations to be used along with the jenkins-jobs repo. It provides a set of basic configs to build OpenStack using RDO or OSP.
  • TripleO quickstart: a set of Ansible playbooks, used for building images and for RDO/OSP deployments.

System requirements

  • One VM to run the Jenkins master + nginx provided by TOAD. If you want to deploy the optional ELK stack, you will need extra VMs for Elasticsearch, Logstash and Kibana.
  • For virtualized OSP deployments: one baremetal server, multi-core, with at least 16GB of RAM and 60GB of disk. This will act as the Jenkins slave, and will hold the virtualized undercloud, controller and compute nodes.
  • For baremetal OSP deployments: the same requirements as the virtualized case, to hold the Jenkins slave + undercloud, plus two additional baremetal servers with at least 8GB of RAM each, to act as controller and compute nodes.
Please note that the servers acting as slaves will need to have a valid Red Hat subscription.

Deployment steps

1. Setup the system

If you rely on TOAD, the initial steps are completely automated. You will need to follow the TOAD documentation for the initial deployment. This will create a Jenkins/nginx VM for you, enroll one slave as well, and configure it properly. It will also populate all the Jenkins jobs for you, leaving a system ready to use.
Please note that, before running the playbook, you need to create the ~/.ansible/vars/toad_vars.yml file properly. The following settings are needed for this use case:
  • rhn_subscription_username: <<your RHN username>>
  • rhn_subscription_password: <<your RHN password>>
  • rhn_subscription_pool_id: <<your RHN pool id>> 
  • slave_mirror_sync: true
This will register the Jenkins slave into the Red Hat Network, and will create a local mirror, with all the needed Red Hat repos, that you can use later for your image builds and deploys.
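
As a minimal sketch, the file could be created directly from the shell, using the variable names above and replacing the placeholders with your own values:

mkdir -p ~/.ansible/vars
cat > ~/.ansible/vars/toad_vars.yml <<EOF
rhn_subscription_username: <<your RHN username>>
rhn_subscription_password: <<your RHN password>>
rhn_subscription_pool_id: <<your RHN pool id>>
slave_mirror_sync: true
EOF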

After the deployment has finished, you can access the Jenkins master on the configured IP/hostname, and you will see all the available jobs.

2. Get base images

For OSP 10 deployments, we need to rely on the RHEL 7.3 guest image. However, the packages do not provide this version yet, so we'll need to download it manually.
In order to do so, open a browser and log in to the Red Hat Customer Portal with your credentials. Then access the RHEL downloads section and copy the KVM guest image link to your clipboard.
SSH into your Jenkins slave and execute the following commands:
mkdir /opt/rhel_guest_images
wget -O /opt/rhel_guest_images/rhel-guest-image-7.3.qcow2 <<link_for_7.3_guest>> 

This will make the base image available for the OSP image builds. This process only needs to be done once for each slave you register.

3. Execute image build job

In order to deploy OSP successfully, we first need to generate the undercloud and overcloud base images. This can be done by executing a pre-configured job that takes the previous 7.3 guest image and composes the desired images, to be reused later by the OSP deployments.
Please note that this job needs to be executed only once per enrolled slave on the system.
Running the job is as easy as logging into the Jenkins system and clicking execute on the build job you need. In our case, you will need to execute the oooq-newton-osp10-build-cira job.

This job will use TripleO quickstart, along with some playbooks that extend it, to compose the undercloud.qcow2 base image. The overcloud and ironic-python-agent images are embedded inside the undercloud image, under the /home/stack directory.
When the job finishes, it moves the final images under the /home/stack/images/osp<<version>> directory, ready to be reused by the deployment job.
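
If you prefer not to use the web UI, the build can also be triggered through the Jenkins REST API. This is only a sketch, assuming your own Jenkins user and API token, and that the job takes no mandatory parameters:

curl -X POST --user <<jenkins_user>>:<<jenkins_api_token>> \
  http://<<jenkins_configured_hostname>>/job/oooq-newton-osp10-build-cira/build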

4. Execute deployment job

Once the base images are ready, it's time to launch OSP deployments. Running one is as simple as executing the right job. In this case, for an OSP 10 deployment you can run the oooq-osp10-osp-deploy-cira job.
This will use TripleO quickstart for the deploy, and will rely on the previously generated base images, as well as on the local repo that was created, to launch a successful OpenStack deployment.
After it finishes, undercloud and overcloud nodes will be available.
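
You can follow the progress from the job's console output, or poll the status of the last build through the Jenkins REST API. Again, this is just a sketch assuming your own Jenkins credentials:

curl --user <<jenkins_user>>:<<jenkins_api_token>> \
  http://<<jenkins_configured_hostname>>/job/oooq-osp10-osp-deploy-cira/lastBuild/api/json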

5. Accessing the undercloud and overcloud

You can then SSH into your slave and switch to the stack user. The undercloud can be accessed with the command:
ssh -F $HOME/quickstart/ssh.config.ansible undercloud
Once on the undercloud, you can source the credentials file:
source $HOME/stackrc
Then you can execute OpenStack commands. You can get the overcloud nodes by executing the nova list command, which will show the list of nodes involved, with their IPs:

nova list
+--------------------------------------+-------------------------+--------+------------+-------------+------------------------+
| ID                                   | Name                    | Status | Task State | Power State | Networks               |
+--------------------------------------+-------------------------+--------+------------+-------------+------------------------+
| 3d4a79d1-53ea-4f32-b496-fbdcbbb6a5a3 | overcloud-controller-0  | ACTIVE | -          | Running     | ctlplane=              |
| 4f8acb6d-6394-4193-a6c6-50d8731fad7d | overcloud-novacompute-0 | ACTIVE | -          | Running     | ctlplane=              |
+--------------------------------------+-------------------------+--------+------------+-------------+------------------------+

Then you can access the nodes using the heat-admin user:

ssh heat-admin@

6. Accessing the logs

Please note that you can access all the logs for the jobs on the same Jenkins VM. In order to get the logs, access http://<<jenkins_configured_hostname>>/logs. That will show a list of jobs and, for each one, folders storing the complete set of logs for each build.
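
If you want to keep a local copy of the logs for a given job, one option is to mirror its folder with wget. This is a sketch that assumes the per-job folder layout described above:

wget -r -np -nH http://<<jenkins_configured_hostname>>/logs/<<job_name>>/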

So that's it! After following these steps you will have a working OSP on your systems, which you can use for experimenting, CI, or any other purpose. Enjoy!

