Automating local mirrors creation in RHEL



Sometimes there is a need to consume RHEL repositories from local mirrors, instead of using the Red Hat content delivery network (CDN). This may be needed to speed up deployments, or because of network constraints.

I created an Ansible playbook, rhel-local-mirrors (https://github.com/redhat-nfvpe/rhel-local-mirrors), that can help with that.

What does rhel-local-mirrors do?

It is basically a tool that connects to the Red Hat CDN and syncs the repositories locally, populating the desired mirrors so that other systems can access them via HTTP.

The playbook performs several tasks, which can be run together or independently:
  • register a system on the Red Hat Network
  • prepare the system to host mirrors
  • create the specified mirrors
  • schedule automatic updates of the mirrors

How to use it?

  • It is an Ansible playbook, so start by installing Ansible in your preferred way.
  • Then continue by cloning the playbook:
    git clone https://github.com/redhat-nfvpe/rhel-local-mirrors.git
  • This playbook expects an inventory group called rhel_mirrors to exist, so start by creating that entry in /etc/ansible/hosts:
[rhel_mirrors]
127.0.0.1 ansible_connection=local
  • This playbook ships with a default config, but some values need to be populated. It expects a file at ~/.ansible/vars/rhel_local_mirrors_vars.yml to be created, containing all the settings that need to be customized. The following settings are mandatory (see the sample vars file after this list):
    • rhn_subscription_username
    • rhn_subscription_password
    • rhn_subscription_pool_id
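A minimal vars file could look like the following; the values shown are placeholders that you need to replace with the data from your Red Hat account (you can find your pool id with subscription-manager list --available):

# ~/.ansible/vars/rhel_local_mirrors_vars.yml
rhn_subscription_username: "your_rhn_user"
rhn_subscription_password: "your_rhn_password"
rhn_subscription_pool_id: "your_pool_id"
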
Once you have these files created, you can simply execute the playbook with:
ansible-playbook -vvvv site.yml -i /etc/ansible/hosts

With these settings populated, the playbook will create a single mirror called rhel.repo, containing the rhel-7-server-rpms, rhel-7-server-extras-rpms, rhel-7-server-rh-common-rpms and rhel-ha-for-rhel-7-server-rpms repos. It will be placed in the /var/www/mirrors/rhel_repo folder, and can be consumed externally by all systems that have HTTP access to your host.

Steps and customization

Register the system on the Red Hat Network

In order to access content from the Red Hat CDN, the instance where you run this playbook needs to be registered on the Red Hat Network. You need to define the following Ansible vars, which need to match the data from your Red Hat account:
  • rhn_subscription_username
  • rhn_subscription_password
  • rhn_subscription_pool_id
If you manage subscriptions on your own, you can disable this step by setting the subscribe_rhn var to False, as shown below.
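
For example, to skip the registration, the vars file would just contain:

subscribe_rhn: False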

This task will collect all the repositories that you defined for your mirrors and will enable those on your system.

To limit the playbook to just this registration step, you can run it with: --tags rhel_register.
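
For example, reusing the invocation shown earlier (the same pattern applies to the tags of the steps below):

ansible-playbook site.yml -i /etc/ansible/hosts --tags rhel_register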

Prepare the system to host mirrors

This step will install the packages needed to perform the mirror creation (the createrepo and yum-utils packages), and will install and configure the nginx service that will serve the mirrors via HTTP. It will also open the HTTP port on your system, if you have iptables or firewalld active.
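
For reference, on a firewalld system the manual equivalent of that last part would be something like:

firewall-cmd --permanent --add-service=http
firewall-cmd --reload
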
To limit the playbook to just this preparation step, you can run it with: --tags prepare_system.

Mirror creation

This step will perform the actual mirror creation, based on your settings. You can define several mirrors and the playbook will automate the creation for all of those independently.
To define the mirrors, you need to use the mirrors var, which expects a list of dictionaries with the name, folder and items keys:

mirrors:
  - name: osp8.repo
    folder: osp8_repo
    items:
        - rhel-7-server-rpms
        - rhel-7-server-extras-rpms
        - rhel-7-server-rh-common-rpms
        - rhel-ha-for-rhel-7-server-rpms
        - rhel-7-server-openstack-8-rpms
        - rhel-7-server-openstack-8-director-rpms
        - rhel-7-server-rhceph-2-osd-rpms
        - rhel-7-server-rhceph-2-mon-rpms
        - rhel-7-server-rhceph-2-tools-rpms
  - name: osp9.repo
    folder: osp9_repo
    items:
        - rhel-7-server-rpms
        - rhel-7-server-extras-rpms
        - rhel-7-server-rh-common-rpms
        - rhel-ha-for-rhel-7-server-rpms
        - rhel-7-server-openstack-9-rpms
        - rhel-7-server-openstack-9-director-rpms
        - rhel-7-server-rhceph-2-osd-rpms
        - rhel-7-server-rhceph-2-mon-rpms
        - rhel-7-server-rhceph-2-tools-rpms
  - name: osp10.repo
    folder: osp10_repo
    items:
        - rhel-7-server-rpms
        - rhel-7-server-extras-rpms
        - rhel-7-server-rh-common-rpms
        - rhel-ha-for-rhel-7-server-rpms
        - rhel-7-server-openstack-10-rpms
        - rhel-7-server-rhceph-2-osd-rpms
        - rhel-7-server-rhceph-2-mon-rpms
        - rhel-7-server-rhceph-2-tools-rpms


After running this task, you will end up with a /var/www/mirrors/general_mirror folder (containing all the repositories defined across all mirrors), and a set of individual folders (/var/www/mirrors/<<mirror.folder>>) that contain just the repos defined for each mirror, along with a <<mirror.name>> file that defines the contents of the repository and that you can reuse to consume these mirrors from other systems:

|--- /var/www/mirrors
|--- |--- general_mirror
|--- |--- osp8_repo
|--- |--- |--- osp8.repo
|--- |--- osp9_repo
|--- |--- |--- osp9.repo
|--- |--- osp10_repo
|--- |--- |--- osp10.repo

A sample <<mirror.name>> file will have this content:

[rhel-7-server-extras-rpms]
name=rhel-7-server-extras-rpms
baseurl=http://10.10.0.13/pub/osp10_repo/rhel-7-server-extras-rpms/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release


[rhel-7-server-openstack-10-rpms]
name=rhel-7-server-openstack-10-rpms
baseurl=http://10.10.0.13/pub/osp10_repo/rhel-7-server-openstack-10-rpms/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release


[rhel-7-server-openstack-8-director-rpms]
name=rhel-7-server-openstack-8-director-rpms
baseurl=http://10.10.0.13/pub/osp10_repo/rhel-7-server-openstack-8-director-rpms/
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release


[rhel-7-server-openstack-8-rpms]
name=rhel-7-server-openstack-8-rpms
baseurl=http://10.10.0.13/pub/osp10_repo/rhel-7-server-openstack-8-rpms/
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release


[rhel-7-server-openstack-9-director-rpms]
name=rhel-7-server-openstack-9-director-rpms
baseurl=http://10.10.0.13/pub/osp10_repo/rhel-7-server-openstack-9-director-rpms/
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release


[rhel-7-server-openstack-9-rpms]
name=rhel-7-server-openstack-9-rpms
baseurl=http://10.10.0.13/pub/osp10_repo/rhel-7-server-openstack-9-rpms/
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release


[rhel-7-server-rh-common-rpms]
name=rhel-7-server-rh-common-rpms
baseurl=http://10.10.0.13/pub/osp10_repo/rhel-7-server-rh-common-rpms/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release


[rhel-7-server-rhceph-2-mon-rpms]
name=rhel-7-server-rhceph-2-mon-rpms
baseurl=http://10.10.0.13/pub/osp10_repo/rhel-7-server-rhceph-2-mon-rpms/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release


[rhel-7-server-rhceph-2-osd-rpms]
name=rhel-7-server-rhceph-2-osd-rpms
baseurl=http://10.10.0.13/pub/osp10_repo/rhel-7-server-rhceph-2-osd-rpms/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release


[rhel-7-server-rhceph-2-tools-rpms]
name=rhel-7-server-rhceph-2-tools-rpms
baseurl=http://10.10.0.13/pub/osp10_repo/rhel-7-server-rhceph-2-tools-rpms/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release


[rhel-7-server-rpms]
name=rhel-7-server-rpms
baseurl=http://10.10.0.13/pub/osp10_repo/rhel-7-server-rpms/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release


[rhel-ha-for-rhel-7-server-rpms]
name=rhel-ha-for-rhel-7-server-rpms
baseurl=http://10.10.0.13/pub/osp10_repo/rhel-ha-for-rhel-7-server-rpms/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
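
To consume one of these mirrors from another system, you can simply drop the generated file into /etc/yum.repos.d/ on the client. Assuming nginx publishes /var/www/mirrors under /pub/, as the baseurls in the sample above suggest, that could look like:

curl -o /etc/yum.repos.d/osp10.repo http://10.10.0.13/pub/osp10_repo/osp10.repo
yum repolist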


To limit the playbook to just this mirror creation step, you can run it with: --tags create_mirror.
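
Under the hood, populating a mirror essentially boils down to a reposync plus createrepo pass per repository, which is why those packages were installed in the preparation step. A rough manual equivalent for a single repo, following the layout above, would be:

reposync --repoid=rhel-7-server-rpms --download_path=/var/www/mirrors/general_mirror
createrepo /var/www/mirrors/general_mirror/rhel-7-server-rpms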

Repository synchronization 

When using local repositories, it is important to keep them in sync. This playbook offers the possibility of setting up a cron job that will trigger the sync automatically, at the desired interval. To enable it, you need to set the repo_autosync var to True. It will then use the following vars:
  • crontab_day
  • crontab_hour
  • crontab_minute
  • crontab_month
  • crontab_weekday
It will install a cron job that will trigger reposync for all the defined repositories, with the schedule specified in the crontab_* vars; a sample is shown below. You can always remove the cron job by running the playbook with the repo_autosync var set to False.
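
For example, to sync every day at 03:30, the vars file could contain the following (the values follow standard cron field conventions; adjust them to your needs):

repo_autosync: True
crontab_minute: 30
crontab_hour: 3
crontab_day: "*"
crontab_month: "*"
crontab_weekday: "*"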

To limit the playbook to just this repository synchronization step, you can run it with: --tags prepare_cron.

Contributing

Several features could be added to this playbook, such as improving the registration system, or adding more options to the mirror creation.
If you want to contribute, please send your pull requests to https://github.com/redhat-nfvpe/rhel-local-mirrors
