Integration of Kubernetes + OpenStack using Ansible

Kubernetes 1.0 was recently released - it was announced at OSCON on July 21st.
As an OpenStack user, and regular OpenStack infra contributor, I see high value in a Kubernetes + OpenStack integration.

For that reason, I started a project, based on https://github.com/GoogleCloudPlatform/kubernetes/tree/master/contrib/ansible , to automate the deployment of a Kubernetes cluster on OpenStack (using HP Cloud as a sample).

In this post, I will describe how to deploy a whole Kubernetes cluster in a few simple steps. All source files used are stored at https://github.com/kubestack/kubestack.

Before starting

So, what do you need to start? You will need a cloud (HP Cloud, Rackspace...) and a service account with enough permissions to manage instances.

You will need Ansible. Ideally, you should clone the Ansible repository from https://github.com/ansible/ and work from the devel branch. If you execute:

$ source ./hacking/env-setup
 
You will be able to run ansible from that branch.
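If you are starting from scratch, a minimal sequence would be the following (a sketch, assuming a git checkout of the devel branch, which is the default; --recursive pulls in the module submodules if the checkout uses them):

$ git clone --recursive https://github.com/ansible/ansible.git
$ cd ansible
$ source ./hacking/env-setup
$ ansible --version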

We will be using the OpenStack dynamic inventory plugin for Ansible, so be sure you have the latest versions of shade (https://github.com/openstack-infra/shade/blob/master/doc/source/installation.rst) and os-client-config (https://github.com/openstack/os-client-config).
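Both are available on PyPI, so installing or upgrading them should be as simple as:

$ pip install --upgrade shade os-client-config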

Node provisioning

We will start by manually provisioning the instances. What you will need (sample boot commands follow the list):
  • Boot an instance on your cloud to be used as the master. Ensure that you add the metadata "groups=masters" to it.
  • Boot an instance to be used as the etcd server (you can share this service with the master). Ensure that you add the metadata "groups=etcd" to the instance. If you share etcd with the master, add "groups=masters,etcd" instead.
  • Boot as many instances on your cloud as minions you need. Ensure that you add the metadata "groups=nodes" to them.
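As an illustration, booting the instances with the nova CLI could look like this (flavor, image and keypair names are placeholders, use whatever exists in your cloud; repeat the last command for every minion you need):

$ nova boot --flavor standard.medium --image ubuntu-14.04 --key-name mykey \
    --meta groups=masters,etcd kube-master
$ nova boot --flavor standard.medium --image ubuntu-14.04 --key-name mykey \
    --meta groups=nodes kube-node-01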

os-client-config setup

In order to use the OpenStack dynamic inventory we rely on shade and os-client-config, so we need to create a file /etc/ansible/openstack.yml with the following content:

cache:
  max_age: 0
clouds:
  cloud_name:
    cloud: hp
    auth:
      username: cloud_username
      password: XXXX
      project_name: cloud_project_name
    region_name: cloud_region
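Once this file is in place and the instances are booted, you can check that the dynamic inventory sees them and groups them by the metadata we set (the path below is just where Ansible was cloned in this example, adjust it to your checkout):

$ python /opt/stack/ansible/contrib/inventory/openstack.py --list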

Clone ansible playbooks and add extra roles

Clone the playbooks from https://github.com/GoogleCloudPlatform/kubernetes and move to the contrib/ansible directory.
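For example:

$ git clone https://github.com/GoogleCloudPlatform/kubernetes.git
$ cd kubernetes/contrib/ansible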

Be sure to update cluster.yml with all the settings we need for OpenStack (you can check a sample at https://github.com/kubestack/kubestack/blob/master/cluster.yml ).
In several OpenStack clouds (HP Cloud among them), name resolution does not work properly. To fix that, be sure to add the set_hostname_and_etc_hosts role to cluster.yml, and also add the code for the role (https://github.com/kubestack/kubestack/tree/master/roles/set_hostname_and_etc_hosts/tasks).
You also need to add all the instance names to the instances_list var in cluster.yml, as shown in the sketch below.
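As a rough sketch of what that addition looks like (instance names here are just examples; the real file is in the kubestack repository linked above):

- hosts: all
  vars:
    instances_list:
      - kube-master
      - kube-node-01
      - kube-node-02
  roles:
    - set_hostname_and_etc_hosts

The remaining plays (etcd, masters, nodes) stay as they ship with the upstream playbooks.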

Configure Cluster options

Look through all of the options in group_vars/all.yml and set the variables to reflect your needs. The options are described there in full detail. You can take https://github.com/kubestack/kubestack/blob/master/group_vars/all.yml as a sample.
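As a purely illustrative excerpt (the variable names below are only examples of the kind of settings you will find there; the linked sample is the authoritative list):

ansible_ssh_user: ubuntu
cluster_name: cluster.local
kube_service_addresses: 10.254.0.0/16
dns_setup: true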

Running the playbook

After going through the setup, run the setup script provided at https://github.com/kubestack/kubestack/blob/master/setup.sh . It is the same as the one provided by Google, but using the dynamic inventory from Ansible. You can simply run it with:
$ ./setup.sh

You may override the inventory file by doing:
$ INVENTORY=/opt/stack/ansible/contrib/inventory/openstack.py ./setup.sh

where that path points to the Ansible OpenStack dynamic inventory script.

That command will start the deployment of your Kubernetes cluster on OpenStack. Wait a bit for Ansible to finish the deployment, and you'll have your Kubernetes cluster up and running!

Credits

Thanks to my colleague Ricardo Carrillo Cruz for helping with my first steps in Ansible, and for providing the set_hostname_and_etc_hosts playbook. And thanks to all the contributors of shade and os-client-config who made this integration possible.

