
Dynamically add Jenkins slaves using Kubernetes

CI is a very interesting area where Kubernetes can be applied. A common use case is needing containers on demand to run tests on code changes or to create build artifacts.
In this post I will explain how to use Kubernetes to spin up Jenkins slaves and connect them to a Jenkins master, allowing tests to run on demand, with a very easy and fast way to scale depending on our needs.

Prerequisites needed on the Jenkins master

Installing Jenkins plugin

This approach relies on the Jenkins Swarm Plugin, so you need to ensure that this plugin is installed on your master:
https://wiki.jenkins-ci.org/display/JENKINS/Swarm+Plugin
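
You can install it from the Manage Plugins page in the web UI, or, if you prefer the command line, with the Jenkins CLI. The commands below are only a sketch: the swarm short name is the plugin id on the update center, and the URL is a placeholder for your own master.

# Install the Swarm plugin via the Jenkins CLI and restart (URL is a placeholder)
java -jar jenkins-cli.jar -s http://url.to.master/ install-plugin swarm
java -jar jenkins-cli.jar -s http://url.to.master/ safe-restart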

For the Jenkins Swarm Plugin to work, you need to enable CSRF protection. Visit http://url.to.master/configureSecurity/ and check the Prevent Cross Site Request Forgery exploits option.


(Screenshot: Jenkins security configuration)

Also note that Jenkins uses a random port by default for slave connections over JNLP. If you are in a protected environment, you may want to choose a fixed port and define security rules for it. To do so, visit the configureSecurity URL again and set a fixed port for Jenkins slaves.

After that, add the rules you need on your firewall to allow connections between the Jenkins master and slaves on that port.
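
As an illustration, on a master running firewalld you could open the fixed slave port like this; the port number 50000 is only an example, use whatever fixed port you configured above.

# Example: open the fixed JNLP slave port (50000 is a placeholder)
firewall-cmd --permanent --add-port=50000/tcp
firewall-cmd --reload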

Configure credentials

The Jenkins Swarm plugin uses Jenkins credentials, which need to be passed to the slave so it can connect to the master. So be sure to create user/password credentials for the Jenkins master domain. You will need the https://wiki.jenkins-ci.org/display/JENKINS/Credentials+Plugin to achieve it.
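
For reference, those credentials are ultimately consumed by the Swarm client running inside the slave container, with an invocation roughly like the following; the master URL is a placeholder and the environment variable names are just examples.

# Example only: how the Swarm client typically registers with the master
java -jar swarm-client.jar \
    -master http://url.to.master:8080 \
    -username "${JENKINS_USER}" \
    -password "${JENKINS_PASS}" \
    -labels "${JENKINS_LABEL}" \
    -executors 1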

How to create a Jenkins slave using Kubernetes

Create the image

The first item that we need is a Docker image that can act as a Jenkins Swarm slave and run some tests on it.
For this purpose, I've created a Docker image that is available at https://hub.docker.com/r/yrobla/jenkins-slave-swarm-infra/ *. This image is based on the general Docker Java images, using version 8. On top of that, we download the Jenkins Swarm client to be able to connect this node to the Jenkins master, and we also add some extra packages that are normally needed to perform Python tests.

* Based on https://hub.docker.com/r/csanchez/jenkins-swarm-slave/
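
As a rough idea of what such an image contains, a minimal Dockerfile could look like the sketch below. This is only an illustration, not the published image: the Swarm client version and URL, the package list and the jenkins-slave.sh entry point script are all assumptions.

# Sketch of a Jenkins Swarm slave image; versions, URLs and packages are assumptions
FROM java:8

# Packages commonly needed to build and run Python tests
RUN apt-get update && apt-get install -y --no-install-recommends \
        curl git python-pip python-dev build-essential && \
    rm -rf /var/lib/apt/lists/* && \
    pip install tox

# Download the Jenkins Swarm client (version is an example, pick one matching your master plugin)
ENV SWARM_CLIENT_VERSION 2.2
RUN curl -fsSL -o /usr/local/bin/swarm-client.jar \
    https://repo.jenkins-ci.org/releases/org/jenkins-ci/plugins/swarm-client/${SWARM_CLIENT_VERSION}/swarm-client-${SWARM_CLIENT_VERSION}-jar-with-dependencies.jar

# Hypothetical entry point script that runs the swarm-client.jar invocation shown earlier,
# reading JENKINS_USER / JENKINS_PASS / JENKINS_LABEL from the environment
COPY jenkins-slave.sh /usr/local/bin/jenkins-slave.sh
CMD ["/usr/local/bin/jenkins-slave.sh"]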

Integrate with Kubernetes

To use this image in Kubernetes and be able to scale on demand, a replication controller that uses this image needs to be created:

https://github.com/kubestack/kubestack/blob/master/jenkins/replication.json
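
The referenced file defines a replication controller for the slave pods. As an orientation only (not the literal content of that file), such a definition looks roughly like this, shown in YAML form for readability; the label selector and environment variable names are assumptions that mirror the exports used below.

# Orientation only: rough shape of a replication controller for the slaves
apiVersion: v1
kind: ReplicationController
metadata:
  name: jenkins-slave
spec:
  replicas: 1
  selector:
    name: jenkins-slave
  template:
    metadata:
      labels:
        name: jenkins-slave
    spec:
      containers:
      - name: jenkins-slave
        image: yrobla/jenkins-slave-swarm-infra
        env:
        - name: JENKINS_USER
          value: jenkins_user
        - name: JENKINS_PASS
          value: xxxx
        - name: JENKINS_LABEL
          value: label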

You can create this replication controller using the Kubernetes API:
export JENKINS_USER=jenkins_user
export JENKINS_PASS=xxxx
export JENKINS_LABEL=label

kubectl create -f ./replication.json 

* Please note that the label is an optional parameter; it is very useful when you need to limit the jobs that run in this type of container.

Once the replication controller has been created, you will see a new Jenkins slave connected to your Jenkins master.
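
From the Kubernetes side you can also check that the slave pod is running; the name=jenkins-slave label selector below is an assumption taken from the sketch above, so adjust it to whatever labels the replication controller actually sets.

# Check the replication controller and its pods
kubectl get rc jenkins-slave
kubectl get pods -l name=jenkins-slave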

Scale it!

Now you can scale the number of your Jenkins slaves on demand, using the Kubernetes API:

kubectl scale --replicas=5 rc jenkins-slave
 

This command automatically creates and attaches 5 Jenkins slaves to your master.

Using this technology gives you a very powerful and easy way to run your tests or build your artifacts, with the only limit being the number of minions (nodes) you can provide in your Kubernetes cluster.
