
Integration of Kubernetes + OpenStack using Ansible

Kubernetes 1.0 was recently released - it was announced at OSCON on July 21st.
As an OpenStack user and regular OpenStack infra contributor, I see high value in a Kubernetes + OpenStack integration.

For that reason, I started a project, based on https://github.com/GoogleCloudPlatform/kubernetes/tree/master/contrib/ansible , to automate the deployment of a Kubernetes cluster on OpenStack (using HP Cloud as a sample).

In this post, I will describe how to deploy a whole Kubernetes cluster in a few steps. All source files used are stored at https://github.com/kubestack/kubestack.

Before starting

So, what do you need to start? You will need a cloud (HP Cloud, Rackspace...) and a service account with enough permissions to manage instances.

You will also need Ansible. Ideally, you should clone the Ansible repository from https://github.com/ansible/ and work from the devel branch. If you execute:

$ source ./hacking/env-setup
 
You will be able to run Ansible directly from that checkout.

We will be using the OpenStack dynamic inventory plugin for Ansible, so be sure you have the latest versions of https://github.com/openstack-infra/shade/blob/master/doc/source/installation.rst and https://github.com/openstack/os-client-config.
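
If they are not installed yet, both libraries can typically be installed from PyPI (a minimal sketch; check the installation docs linked above for distribution-specific instructions):

$ pip install --upgrade shade os-client-config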

Node provisioning

We will start by manually provisioning the instances (a sample boot command is shown after the list). You will need to:
  • Boot an instance on your cloud to be used as the master. Ensure that you add the metadata "groups=masters" to it.
  • Boot an instance to be used as the etcd server (you can share this service with the master instance). Ensure that you add the metadata "groups=etcd" to it. If you share etcd with the master, add "groups=masters,etcd" instead.
  • Boot as many instances on your cloud as the minions you need. Ensure that you add the metadata "groups=nodes" to each of them.
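
For example, booting the master and one node could look like this (a sketch using the nova CLI; image, flavor and key names are placeholders for your own cloud):

$ nova boot --image <image> --flavor <flavor> --key-name <keypair> --meta groups=masters kube-master
$ nova boot --image <image> --flavor <flavor> --key-name <keypair> --meta groups=nodes kube-node-01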

os-client-config setup

In order to use the OpenStack dynamic inventory, we rely on shade and os-client-config. We need to create a file /etc/ansible/openstack.yml with the following content:

cache:
  max_age: 0
clouds:
  cloud_name:
    cloud: hp
    auth:
      username: cloud_username
      password: XXXX
      project_name: cloud_project_name
    region_name: cloud_region
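
Once that file is in place, you can check that the dynamic inventory sees your instances and their groups by running the inventory script directly (the path below assumes the Ansible checkout mentioned earlier; adjust it to your own location):

$ /opt/stack/ansible/contrib/inventory/openstack.py --list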

Clone ansible playbooks and add extra roles

Clone the playbooks from https://github.com/GoogleCloudPlatform/kubernetes and move to the contrib/ansible directory.
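
For example:

$ git clone https://github.com/GoogleCloudPlatform/kubernetes.git
$ cd kubernetes/contrib/ansible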

Be sure to update cluster.yml with all the settings needed for OpenStack (you can check a sample at https://github.com/kubestack/kubestack/blob/master/cluster.yml ).
In several OpenStack clouds (such as HP Cloud), name resolution does not work properly. To fix that, be sure to add the set_hostname_and_etc_hosts role to cluster.yml, and also add the code for the role (https://github.com/kubestack/kubestack/tree/master/roles/set_hostname_and_etc_hosts/tasks).
You also need to add all the instance names to the instances_list variable in cluster.yml.
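
As a rough sketch (host group, role and instance names below are only indicative; the kubestack sample linked above is the authoritative reference), cluster.yml ends up looking similar to this:

---
# indicative sketch only; see the kubestack sample for the real file
- hosts: all
  vars:
    instances_list:
      - kube-master
      - kube-node-01
      - kube-node-02
  roles:
    - set_hostname_and_etc_hosts

- hosts: etcd
  roles:
    - etcd

- hosts: masters
  roles:
    - masters

- hosts: nodes
  roles:
    - nodes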

Configure Cluster options

Look through all of the options in group_vars/all.yml and set the variables to reflect your needs. The options are described there in full detail. You can take https://github.com/kubestack/kubestack/blob/master/group_vars/all.yml as a sample.
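
Just as an illustration (variable names below are only indicative; the sample file linked above is the authoritative reference), the kind of settings you will typically adjust are the SSH user, the networking backend and the service address range:

ansible_ssh_user: centos
networking: flannel
kube_service_addresses: 10.254.0.0/16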

Running the playbook

After going through the setup, run the setup script provided at https://github.com/kubestack/kubestack/blob/master/setup.sh . It is the same as the one provided by Google, but it uses Ansible's OpenStack dynamic inventory. You can simply run it with:
$ ./setup.sh

You may override the inventory file by doing:
$ INVENTORY=/opt/stack/ansible/contrib/inventory/openstack.py ./setup.sh

Where that path points to Ansible OpenStack dynamic inventory.

That command will start the deployment of your Kubernetes cluster on OpenStack. Wait a bit for Ansible to finish the deployment, and you'll have your Kubernetes cluster up and running!
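
Once the playbook finishes, you can verify the cluster from the master node (assuming kubectl is available there, as installed by the playbooks):

$ kubectl get nodes
$ kubectl cluster-info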

Credits

Thanks to my colleague Ricardo Carrillo Cruz for helping with my first steps in Ansible, and for providing the set_hostname_and_etc_hosts playbook. And thanks to all the contributors to shade and os-client-config who made this integration possible.

