
How to deploy TripleO Queens without external network

TripleO Queens has an interesting feature called 'composable networks'. It allows you to deploy OpenStack with the choice of networks that you want, depending on your environment. Please see: https://docs.openstack.org/tripleo-docs/latest/install/advanced_deployment/custom_networks.html

By default, the following networks are defined:
  • Storage
  • Storage Management
  • Internal Api
  • Tenant
  • Management
  • External
The external network allows the endpoints to be reached externally, and also allows you to define networks so the VMs can be reached externally as well. But to have that, you need a routable network with external access in your lab. Not all labs have one, especially CI environments, so it may be useful to deploy without it and just have internal access to endpoints and VMs. In this blog post I'm going to explain how to achieve that.

First make a copy of your original tripleo-heat-templates to another directory, /home/stack/working-templates, and edit the following files:

network_data.yaml

This file contains the network definitions used by default. You will need to edit it and remove all the external definitions from there. Remove this block:

- name: External
  vip: true
  name_lower: external
  vlan: 10
  ip_subnet: '10.0.0.0/24'
  allocation_pools: [{'start': '10.0.0.4', 'end': '10.0.0.250'}]
  gateway_ip: '10.0.0.1'
  ipv6_subnet: '2001:db8:fd00:1000::/64'
  ipv6_allocation_pools: [{'start': '2001:db8:fd00:1000::10', 'end': '2001:db8:fd00:1000:ffff:ffff:ffff:fffe'}]
  gateway_ipv6: '2001:db8:fd00:1000::1'


Also edit all the other values to match the settings of your lab.
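
For example, an edited entry for the Internal API network could look like the following; the VLAN and subnet values here are just placeholders, so use the ones from your lab:

- name: InternalApi
  name_lower: internal_api
  vip: true
  vlan: 20
  ip_subnet: '172.16.2.0/24'
  allocation_pools: [{'start': '172.16.2.4', 'end': '172.16.2.250'}]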

roles_data.yaml

This file contains the definitions for each role, including the networks that each role expects to use. You need to edit this file and remove the External network from the Controller role:

- name: Controller
  ...
  tags:
    - primary
    - controller
  networks:
    - External -> remove that


You also need to change the default route of the controller, to stop using the External network as the default and start using the ControlPlane network:

default_route_networks: ['External'] -> default_route_networks: ['ControlPlane']
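
After both edits, the relevant part of the Controller role definition should look roughly like this (assuming the remaining networks match the default role definition):

- name: Controller
  ...
  tags:
    - primary
    - controller
  networks:
    - InternalApi
    - Storage
    - StorageMgmt
    - Tenant
  default_route_networks: ['ControlPlane']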

network/service_net_map.j2.yaml

This file contains the mapping of services to networks. It needs to be edited to modify the network assigned to the public endpoints: instead of external, it needs to be mapped to internal_api:

PublicNetwork: external -> PublicNetwork: internal_api
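
In context, the entry under ServiceNetMapDefaults ends up looking roughly like this (the other service mappings stay unchanged):

ServiceNetMapDefaults:
  default:
    ...
    PublicNetwork: internal_api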

puppet/all-nodes-config.j2.yaml

This file contains puppet configuration for the nodes, and has some values that reference the External network. They need to be changed to point to the Internal API network:

tripleo::haproxy::public_virtual_ip: {get_param: [NetVipMap, {get_param: ExternalNetName}]} -> tripleo::haproxy::public_virtual_ip: {get_param: [NetVipMap, {get_param: InternalApiNetName}]}

tripleo::keepalived::public_virtual_ip: {get_param: [NetVipMap, {get_param: ExternalNetName}]} -> tripleo::keepalived::public_virtual_ip: {get_param: [NetVipMap, {get_param: InternalApiNetName}]}

public_virtual_ip: {get_param: [NetVipMap, {get_param: ExternalNetName}]} -> public_virtual_ip: {get_param: [NetVipMap, {get_param: InternalApiNetName}]}
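
After the edit, those three values should read:

tripleo::haproxy::public_virtual_ip: {get_param: [NetVipMap, {get_param: InternalApiNetName}]}
tripleo::keepalived::public_virtual_ip: {get_param: [NetVipMap, {get_param: InternalApiNetName}]}
public_virtual_ip: {get_param: [NetVipMap, {get_param: InternalApiNetName}]}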

Generate the templates

Once all these files have been edited, the templates can be generated. There is a Python helper script to achieve that: inside your working directory, check tools/process-templates.py. It accepts several parameters:
  • -p -> specify the base path where to collect the templates from 
  • -r -> roles_data file to consume 
  • -n -> network_data file to consume 
  • -o -> output_dir where to generate the target templates
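For example, a run from inside the working templates directory could look like this (the output directory is just an example path):

python tools/process-templates.py -p . -r roles_data.yaml -n network_data.yaml -o /home/stack/generated-templates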
Running the script with those flags will generate the final templates for you. You can then include them in your deploy command. A sample deploy command could be:

openstack overcloud deploy --templates ./templates -r ./templates/roles_data.yaml -e ./templates/docker-images.yaml -e ./templates/environments/net-single-nic-with-vlans-no-external.yaml -e ./templates/environments/network-environment.yaml

In this case an extra environment, net-single-nic-with-vlans-no-external, is included, to be able to deploy with just one NIC using different VLANs and without an external network. A sample of the templates generated with this method can be found at: https://github.com/redhat-nfvpe/toad_envs/blob/master/13_no_external_sample_environment

Following these steps you will have your OpenStack cloud deployed without an external network, using only internal endpoints, which is useful for testing and CI purposes.





