Highly available Load-balancer for Kubernetes Cluster On-Premise – II

In the first post of this series, HAProxy and Keepalived were installed, configured, and tested.

In this post, two stateless web applications will be deployed to Kubernetes, and domain names will be registered in DNS for these two applications to test whether the load balancer is working as expected.

Note: For my home-lab, I am using the domain nordic.io.

For the Kubernetes cluster, I am assuming that the nginx Ingress controller is deployed as a DaemonSet and listens on ports 80 and 443 on each worker node.
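
A quick way to confirm this (the ingress-nginx namespace name is an assumption; adjust it to your deployment):

kubectl get daemonset -n ingress-nginx
kubectl get pods -n ingress-nginx -o wide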

Deploying Kubernetes Web Applications:

apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes-svc
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: hello-kubernetes
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
      - name: hello-kubernetes
        image: paulbouwer/hello-kubernetes:1.8
        ports:
        - containerPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-kubernetes-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: helloworld.nordic.io  
    http:
      paths:
        - path: /
          backend:
            serviceName: hello-kubernetes-svc
            servicePort: 80

apiVersion: v1
kind: Service
metadata:
  name: whoami-svc
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: whoami
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: whoami
  name: whoami
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      run: whoami
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        run: whoami
    spec:
      containers:
      - image: yeasy/simple-web:latest
        name: whoami
      restartPolicy: Always
      schedulerName: default-scheduler
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: whoami-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: whoami.nordic.io  
    http:
      paths:
        - path: /
          backend:
            serviceName: whoami-svc
            servicePort: 80
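
Assuming the manifests above are saved as hello-kubernetes.yaml and whoami.yaml (the file names are just an assumption), they can be applied and checked as follows:

kubectl apply -f hello-kubernetes.yaml
kubectl apply -f whoami.yaml
kubectl get deployments,svc,ingress -n default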

Registering Web Apps to DNS:

Adding DNS records is one of the crucial parts. In order to point multiple services at a single load-balancer IP, we add CNAME records. You can see the BIND DNS configuration below.

vip1 IN A 10.5.100.50
helloworld IN CNAME vip1
whoami IN CNAME vip1
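
After editing the zone file (and bumping the zone serial), reload the zone; a minimal sketch assuming a standard BIND setup with a zone named nordic.io:

# reload only the nordic.io zone
sudo rndc reload nordic.io
# or restart the whole service (service name varies: named or bind9)
sudo systemctl restart named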

Experiment:

Checking DNS Records.

[tesla@deployment ~]$ nslookup helloworld
Server:		10.5.100.253
Address:	10.5.100.253#53

helloworld.nordic.io	canonical name = vip1.nordic.io.
Name:	vip1.nordic.io
Address: 10.5.100.50

[tesla@deployment ~]$ nslookup whoami
Server:		10.5.100.253
Address:	10.5.100.253#53

whoami.nordic.io	canonical name = vip1.nordic.io.
Name:	vip1.nordic.io
Address: 10.5.100.50

Testing Services:

Hello World App:

Whoami App:
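
Besides the browser screenshots, both applications can be verified from the command line; a minimal sketch with curl:

curl -s http://helloworld.nordic.io | head -n 20
curl -s http://whoami.nordic.io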

Highly available Load-balancer for Kubernetes Cluster On-Premise – I

In this post, we are going to build a highly available HAProxy load balancer for our on-premise Kubernetes cluster. HAProxy will act as the external load balancer, taking requests from the outside world and sending them to the Kubernetes worker nodes, where the nginx ingress controller listens for incoming requests on ports 80 and 443.

Another crucial software component is Keepalived, which keeps the HAProxy load balancer highly available in case one of the HAProxy nodes goes down.

Keepalived is a robust Virtual Router Redundancy Protocol (VRRP) implementation for GNU/Linux.

To build the cluster, Ubuntu 18.04.4 is used. You can see in the diagram below what the environment looks like.

Installing the Necessary Software Suites

# sudo apt-get install haproxy
# sudo apt-get install keepalived
# sudo systemctl enable haproxy
# sudo systemctl enable keepalived

Configuring Necessary Kernel Parameters

It is very important to apply the configuration below on both nodes.

In order for the Keepalived service to forward network packets properly to the real servers, each node must have IP forwarding turned on in the kernel. Log in as root and change the line which reads net.ipv4.ip_forward = 0 in /etc/sysctl.conf to the following:

net.ipv4.ip_forward = 1

The changes take effect when you reboot the system. Running HAProxy and Keepalived together also requires the ability to bind to a nonlocal IP address, meaning one that is not assigned to a device on the local system. This allows a running load-balancer instance to bind to an IP that is not local, for failover. To enable this, edit the net.ipv4.ip_nonlocal_bind line in /etc/sysctl.conf so that it reads:

net.ipv4.ip_nonlocal_bind = 1

The changes take effect when you reboot the system.
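
Alternatively, both parameters can be applied immediately without a reboot:

sudo sysctl -w net.ipv4.ip_forward=1
sudo sysctl -w net.ipv4.ip_nonlocal_bind=1
# or re-read /etc/sysctl.conf
sudo sysctl -p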

Configuring Keepalived on both nodes

Some Keepalived settings have to be changed accordingly on the second node, so check the commented lines in keepalived.conf.

global_defs {
   notification_email {
     admin@manintheit.org
   }
   notification_email_from keepalived@manintheit.org
   smtp_server localhost
   smtp_connect_timeout 30
   router_id ha1 #router_id ha2 on the second node(ha2)
   vrrp_skip_check_adv_addr
   vrrp_garp_interval 0.5
   vrrp_garp_master_delay 1
   vrrp_garp_master_repeat 5
   vrrp_gna_interval 0
   enable_script_security
   script_user root
   vrrp_no_swap
   checker_no_swap
}

# Script used to check if HAProxy is running
vrrp_script check_haproxy {
       script "/usr/bin/pgrep haproxy 2>&1 >/dev/null"
        interval 1
        fall 2
        rise 2
}

# Virtual interface
vrrp_instance VI_01 {
        state MASTER #state BACKUP on the second node(ha2)
        interface enp1s0
        virtual_router_id 120
        priority 101  #priority 100 on the second node(ha2). Higher number wins.
        nopreempt
        advert_int 1
        unicast_src_ip 10.5.100.51  #unicast_src_ip 10.5.100.52 on the second node(ha2)
        unicast_peer {
                10.5.100.52    #unicast_peer 10.5.100.51 on the second (ha2)
        }
        virtual_ipaddress {
                10.5.100.50/24 dev enp1s0 label enp1s0:ha-vip1
        }
        authentication {
                auth_type PASS
                auth_pass MANINTHEIT
        }
        track_script {
                check_haproxy
        }
}

HAProxy Config

# Default configuration sections in haproxy.cfg have been omitted.

frontend stats
    bind 10.5.100.51:9000
    mode http
    maxconn 10
    stats enable
    stats show-node
    stats hide-version
    stats realm Haproxy\ Statistics
    stats uri /hastats
    stats auth haadmin:haadmin

frontend k8s-service-pool
  mode tcp
  bind 10.5.100.50:80
  default_backend k8s-service-backend

backend k8s-service-backend
    mode tcp
    balance source
    server k8s-worker-01 10.5.100.21 check port 80 inter 10s rise 1 fall 2 
    server k8s-worker-02 10.5.100.22 check port 80 inter 10s rise 1 fall 2
    server k8s-worker-03 10.5.100.23 check port 80 inter 10s rise 1 fall 2
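
Before restarting, the configuration can be syntax-checked on each node:

sudo haproxy -c -f /etc/haproxy/haproxy.cfg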

Restart the keepalived and haproxy services on both nodes.

# sudo systemctl restart haproxy
# sudo systemctl restart keepalived

Experiment:

1- Let's check with the tcpdump utility whether the master node sends VRRP advertisement packets every second to all members of the VRRP group.

# tcpdump proto 112 -n

2- Let's check the interface IPs. As you can see, the first node (ha1) is the active node, as it holds the VIP 10.5.100.50.
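
For example, the VIP can be checked on each node with ip (the interface name enp1s0 comes from the Keepalived configuration above):

ip addr show enp1s0
# on the active node the output should include 10.5.100.50 with the label enp1s0:ha-vip1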

The second part of this post will be published soon. Stay healthy!

Hourly log rotation

Sometimes you need to rotate your logs hourly instead of daily or weekly: in a big virtual environment many things need to be logged, and daily logs can grow so large that hourly rotation becomes necessary. For this, you need to customize some settings on your central syslog server. You can find sample steps below for creating a logrotate configuration that rotates logs hourly.

Steps:

1 – Copy /etc/cron.daily/logrotate to /etc/cron.hourly and set it as executable.

# cp /etc/cron.daily/logrotate /etc/cron.hourly
# chmod u+x /etc/cron.hourly/logrotate

2- Create a folder logrotate.hourly.conf in /etc

# mkdir -p /etc/logrotate.hourly.conf

3- Modify the logrotate file in /etc/cron.hourly based on your needs. See the sample below.

#!/bin/sh
/usr/sbin/logrotate /etc/logrotate.hourly.conf/example
EXITVALUE=$?
if [ $EXITVALUE != 0 ]; then
    /usr/bin/logger -t logrotate "ALERT exited abnormally with [$EXITVALUE]"
fi
exit $EXITVALUE

4- Create your logrotate configuration in the folder /etc/logrotate.hourly.conf

For this post, we named it ‘example’ (this is the path referenced in the /etc/cron.hourly/logrotate script above).

/opt/logs/[2-9][0-9][0-9][0-9]/[0-9][0-9]/[0-9][0-9]/*.log {
    notifempty
    compress
    maxage 60
    rotate 200
    create 0600 root root
    size 4G
    postrotate
        /usr/bin/systemctl reload syslog-ng > /dev/null
    endscript
}

  • You may need to tune the rotate, size, and maxage options based on your needs.
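
To verify the configuration without rotating anything, logrotate can be run in debug mode first (a sketch using the paths from this post):

# dry run, shows what would be rotated
sudo /usr/sbin/logrotate -d /etc/logrotate.hourly.conf/example
# force an immediate rotation once the output looks right
sudo /usr/sbin/logrotate -f /etc/logrotate.hourly.conf/example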

LACP Configuration with Cumulus VX Virtual Appliances

In this post, LACP will be configured on Cumulus VX virtual appliances. The test is simulated in GNS3.

Sample Topology

This is the sample network topology to test LACP.

Configuration on Both Virtual Appliances for LACP

In this configuration, ports swp1 and swp2 are used as slave ports of LAGG0 on both switches.

cumulus@cumulus:~$ net add bond LAGG0 bond slaves swp1,2
cumulus@cumulus:~$ net add bond LAGG0 bond mode 802.3ad
cumulus@cumulus:~$ net add bond LAGG0 bond lacp-rate slow
cumulus@cumulus:~$ net add bond LAGG0 bond miimon 100

Configuration on Both Virtual Appliances for VLAN configuration

In the configuration below, VLAN 10 is added to the bridge (on Cumulus, a VLAN-aware bridge) and swp3 is configured as an access port with VLAN ID 10.

cumulus@cumulus:~$ net add bridge bridge vlan-protocol 802.1ad
cumulus@cumulus:~$ net add bridge bridge ports LAGG0
cumulus@cumulus:~$ net add bridge bridge ports swp3
cumulus@cumulus:~$ net add bridge bridge vids 10
cumulus@cumulus:~$ net add interface swp3 bridge access 10
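
NCLU changes are staged until they are committed; review and apply them on both switches as follows:

cumulus@cumulus:~$ net pending
cumulus@cumulus:~$ net commit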

Experiment

It was also verified that after one of the links is cut, the host is still able to ping the other end without any packet loss.
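
A rough sketch of that check, assuming the hosts attached to swp3 are addressed 10.0.10.1/24 and 10.0.10.2/24 (hypothetical addresses, not taken from the topology above):

# run from one host while a LAGG0 member link is disconnected
ping -c 50 10.0.10.2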

Run VMware PowerCLI in a Docker Container

If you have a vSphere environment in your infrastructure and want to automate provisioning steps, you most likely need PowerCLI, which normally requires a Windows guest or a supported GNU/Linux guest in your environment. Terraform is also a very nice tool for provisioning new virtual machines, but it is more limited.

If you have stumbled upon the same case, this post is for you. You do not actually need any GNU/Linux or Windows guest, only Docker. If you have a Docker environment, you can pull the vmware/powerclicore image, run it, and connect to your vCenter Server from within that container.

I used it to set the boot priority of virtual guests. One of the good things about Docker is that you can pass environment variables into the container and use them there.

docker run  --rm -it --entrypoint="/usr/bin/pwsh"  \
   -e VI_SERVER=${TF_VAR_vsphere_server} \
   -e VI_USERNAME=${TF_VAR_vsphere_user} \
   -v ${pwd}/scripts:/tmp/scripts vmware/powerclicore /tmp/scripts/setboot-priority.ps1

What the above shell command does is instantiate the vmware/powerclicore image, mount the <pwd>/scripts folder on the host (where config.txt and setboot-priority.ps1 live) to /tmp/scripts in the container, and run setboot-priority.ps1 inside the container.

You can get the setboot-priority.ps1 script from my GitHub repo.

Creating an Image for MaaS with Packer

In this post we are going to build an image with Packer that will be deployed via MaaS. After the image is built and uploaded to MaaS, it can be used to provision virtual machines or deploy an OS on bare-metal machines. In order to build an image that is deployable with MaaS, we need a couple of files, which you can clone here.

For this post, a minimal CentOS 7 image including the httpd package will be created for testing. One of the cool things about Packer is that you can also execute your Ansible playbook inside the machine being provisioned by Packer. The Ansible (remote) provisioner is used to configure the NTP server and install httpd.

As QEMU is used as the builder, qemu-system-x86_64 has to be installed on the host where Packer runs. Packer will create a qcow2 image after a successful build, but we are going to use a tar.gz image file since we deploy the image via MaaS.
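
On an Ubuntu build host, for example, the QEMU pieces (including qemu-nbd, which the post-processor below relies on) can be installed roughly like this; package names may differ on other distributions:

sudo apt-get install qemu-kvm qemu-utils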

centos7.json (do not forget to change iso_url in accordance with your environment.)

{
    "builders": [
        {
            "type": "qemu",
	    "iso_url": "/home/tesla/packer/centos7/isos/CentOS-7-x86_64-NetInstall-2003.iso",
            "iso_checksum_type": "sha256",
	    "iso_checksum": "101bc813d2af9ccf534d112cbe8670e6d900425b297d1a4d2529c5ad5f226372",
            "boot_command": [
                "<tab> ",
                "inst.ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/centos7.ks ",
                "<enter>"
            ],
	    "ssh_username": "tesla",
	    "ssh_password": "tesla",
	    "ssh_wait_timeout": "12000s",
            "boot_wait": "3s",
            "disk_size": "4G",
	    "display": "none",
            "headless": false,
            "memory": 4096,
	    "accelerator": "kvm",
	    "cpus": 4,
            "http_directory": "http",
            "shutdown_timeout": "20m",
	    "disk_interface": "virtio",
            "format": "qcow2",
            "net_device": "virtio-net"

        }
    ],

"post-processors": [
        {
            "type": "shell-local",
            "inline_shebang": "/bin/bash -e",
            "inline": [
                "TMP_DIR=$(mktemp -d /tmp/packer-maas-XXXX)",
                "echo 'Mounting image...'",
                "modprobe nbd",
                "qemu-nbd -d /dev/nbd4",
                "qemu-nbd -c /dev/nbd4 -n output-qemu/packer-qemu",
                "echo 'Waiting for partitions to be created...'",
                "tries=0",
                "while [ ! -e /dev/nbd4p1 -a $tries -lt 60 ]; do",
                "    sleep 1",
                "    tries=$((tries+1))",
                "done",
                "echo 'Tarring up image...'",
                "mount /dev/nbd4p1 $TMP_DIR",
                "tar -Sczpf centos7.tar.gz --selinux -C $TMP_DIR .",
                "echo 'Unmounting image...'",
                "umount $TMP_DIR",
                "qemu-nbd -d /dev/nbd4",
                "rmdir $TMP_DIR"
            ]
        }
    ],
    "provisioners": [
    {
  "type": "shell",
  "pause_before": "5s",
  "inline": [
	"sudo yum -y install epel-release",
	"sudo yum -y update",
	"sudo yum -y remove cloud-init",
	"sudo yum -y install python-jsonschema python-devel",
	"sudo yum -y install cloud-init --disablerepo=* --enablerepo=group_cloud-init-el-stable",
	"sudo yum -y install qemu-guest-agent wget"
  ]
},
    {
      "user": "tesla",
      "type": "ansible",
      "playbook_file": "./ansible/main.yml"
    },
    {
     "type": "shell",
     "inline": [
     
	"sudo systemctl enable cloud-init",
	"sudo rm -rf /var/lib/cloud/",
	"sudo rm -rf /etc/cloud/cloud-init.disabled",
	"sudo /usr/bin/truncate -s 0 /etc/fstab",
	"sudo /usr/bin/truncate -s 0 /etc/resolv.conf",
        "sudo rm -f /etc/sysconfig/network-scripts/ifcfg-[^lo]*",
        "sudo sync"
  ]
}
    ]
}
#main.yml
---
- hosts: default
  become: yes
  roles:
    - configure_httpd
    - configure_chrony

centos7.ks

url --mirrorlist="http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=os"
text
reboot
firewall --enabled --service=ssh,http
firstboot --disable
ignoredisk --only-use=vda
lang en_US.UTF-8
keyboard us
network --bootproto=dhcp --hostname=packer-maas.manintheit.org
selinux --enforcing
timezone UTC --isUtc
bootloader --location=mbr --driveorder="vda" --timeout=1
rootpw --plaintext root1234
user --name=tesla --groups=wheel --plaintext --password=tesla


repo --name="Base" --mirrorlist="http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=os"
repo --name="Updates" --mirrorlist="http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=updates"
repo --name="Extras" --mirrorlist="http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=extras"
repo --name="cloud-init" --baseurl="http://copr-be.cloud.fedoraproject.org/results/@cloud-init/el-stable/epel-7-x86_64"

zerombr
clearpart --all --initlabel
part / --size=1 --grow --asprimary --fstype=ext4


%packages
@core
#httpd
bash-completion
cloud-init-el-release
cloud-init
# cloud-init only requires python-oauthlib with MAAS. As such upstream
# has removed python-oauthlib from cloud-init's deps.
python2-oauthlib
cloud-utils-growpart
rsync
tar
yum-utils
# bridge-utils is required by cloud-init to configure networking. Without it
# installed cloud-init will try to install it itself which will not work in
# isolated environments.
bridge-utils
# Tools needed to allow custom storage to be deployed without acessing the
# Internet.
grub2-efi-x64
shim-x64
# Older versions of Curtin do not support secure boot and setup grub by
# generating grubx64.efi with grub2-efi-x64-modules.
grub2-efi-x64-modules
efibootmgr
dosfstools
lvm2
mdadm
device-mapper-multipath
iscsi-initiator-utils
-plymouth
# Remove ALSA firmware
-a*-firmware
# Remove Intel wireless firmware
-i*-firmware
%end


%post --erroronfail
systemctl disable cloud-init
touch /etc/cloud/cloud-init.disabled
yum install -y sudo
echo "tesla        ALL=(ALL)       NOPASSWD: ALL" >> /etc/sudoers.d/tesla
sed -i "s/^.*requiretty/#Defaults requiretty/" /etc/sudoers
yum clean all
%end

Validating Packer config.

sudo PACKER_LOG=1 packer validate centos7.json

After successful validation, we can start building the machine image.

sudo PACKER_LOG=1 packer build centos7.json

Once Packer finishes successfully, a centos7.tar.gz file should be created, which will be uploaded to MaaS to provision VMs or install the OS on bare-metal servers.

Uploading image to MaaS.

maas $PROFILE boot-resources create name="centos/centos7Packer1" architecture=amd64/generic content@=centos7.tar.gz
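
The uploaded resource can then be listed to confirm it is available (a sketch using the same $PROFILE):

maas $PROFILE boot-resources read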

Important Takeaways on Packer.

Do not forget to issue the sync command inside the machine provisioned by Packer; otherwise, you may end up with empty systemd unit files. The sync command flushes data from the cache to disk.

MaaS uses cloud-init to set up various host settings such as disk partitioning, DNS servers, network configuration, etc., so some of the files are removed or truncated prior to image creation, because MaaS will populate these configurations during the first boot of the system. If you deploy the OS via MaaS, configure the DNS settings in MaaS; even if you configure resolv.conf inside the image, it will be overwritten by cloud-init anyway.

Provisioning a VM on KVM via Kickstart using virt-install

virt-install is a command-line tool for creating new KVM, Xen, or Linux container guests using the libvirt hypervisor management library. It is one of the quickest ways to deploy a VM from the command line. In this post I will also show you how to install CentOS on KVM via kickstart. For this installation, instead of the native GNU/Linux bridge, we are using Open vSwitch.

I am assuming that you have already configured your DHCP, DNS, and HTTP server environment for PXE boot. I am using Cobbler for DHCP server management, and I am provisioning CentOS machines to install the Kubernetes cluster nodes. As I use a remote KVM host, the user tesla has to be able to connect with SSH key authentication and has to be in the libvirt group.
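
A rough sketch of that preparation, assuming the KVM host is 192.168.122.1 as used in the provisioning script below:

# copy the SSH public key of the deployment user to the KVM host
ssh-copy-id tesla@192.168.122.1
# on the KVM host, add tesla to the libvirt group
sudo usermod -aG libvirt tesla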

Provisioning script.

virt-install \
--connect qemu+ssh://tesla@192.168.122.1/system \
--name k8s-master \
--ram 2048 \
--disk bus=virtio,pool=KVMGuests,size=15,format=qcow2 \
--network network=OVS0,model=virtio,virtualport_type=openvswitch,portgroup=VLAN100 \
--vcpus 2 \
--os-type linux \
--location http://cobbler.manintheit.org/cblr/links/CentOS7-x86_64 \
--os-variant rhel7 \
--extra-args="ks=http://10.5.100.253/k8s/k8s-master-centos7.ks ksdevice=eth0 ip=10.5.100.15 netmask=255.255.255.0 dns=10.5.100.253 gateway=10.5.100.254" 

--location: Distribution tree installation source. virt-install recognizes certain distribution trees and fetches a bootable kernel/initrd pair to launch the install.
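
Once the installation kicks off, the guest console can be attached from the same workstation, using the connection URI from the script above:

virsh --connect qemu+ssh://tesla@192.168.122.1/system console k8s-master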

k8s-master-centos7.ks

install
text
eula --agreed
url --url=http://10.5.100.253/cblr/links/CentOS7-x86_64/
lang en_US.UTF-8
keyboard us
network --onboot=on --bootproto=static  --ip 10.5.100.15 --netmask 255.255.255.0 --gateway 10.5.100.254 --nameserver 10.5.100.253 --device=eth0 --hostname k8s-master.manintheit.org
rootpw root
firewall --disabled
selinux --permissive
timezone Europe/Berlin
skipx
zerombr
clearpart --all --initlabel
part /boot --fstype ext4 --size=512
part /     --fstype ext4 --size=1 --grow
authconfig --enableshadow --passalgo=sha512
services --enabled=NetworkManager,sshd
reboot
user --name=tesla --plaintext --password tesla --groups=tesla,wheel

#repo --name=base --baseurl=http://mirror.centos.org/centos/7.3.1611/os/x86_64/
#repo --name=epel-release --baseurl=http://anorien.csc.warwick.ac.uk/mirrors/epel/7/x86_64/
#repo --name=elrepo-kernel --baseurl=http://elrepo.org/linux/kernel/el7/x86_64/
#repo --name=elrepo-release --baseurl=http://elrepo.org/linux/elrepo/el7/x86_64/
#repo --name=elrepo-extras --baseurl=http://elrepo.org/linux/extras/el7/x86_64/

%packages --ignoremissing --excludedocs
@Base
%end

%post
yum update -y
yum install -y sudo
echo "tesla        ALL=(ALL)       NOPASSWD: ALL" >> /etc/sudoers.d/tesla
sed -i "s/^.*requiretty/#Defaults requiretty/" /etc/sudoers
/bin/echo 'UseDNS no' >> /etc/ssh/sshd_config
yum clean all
/bin/sh -c 'echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf'
modprobe br_netfilter
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

#Enable kubernetes repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
%end

MaaS MAC filtering

Are you using MaaS for bare-metal server or VM deployments, and do you need MAC filtering so that MaaS ignores DHCP discovery packets coming from particular MAC address(es)? If so, you can simply add an isc-dhcp configuration snippet similar to the one below to your MaaS (Settings > DHCP Snippets) and set the scope to Global.

class "black-list" 
{    
	match substring (hardware, 1, 6);      
	ignore booting;
}
subclass "black-list" 00:10:9b:8f:31:71;
subclass "black-list" 00:10:9b:8f:31:78;

After adding, enabling, and saving it, MaaS will not send any PXE boot reply to systems with the MAC addresses above.

Happy Deployment 🙂

KVM provisioning with Jenkins and Terraform(Cloud-init)

In the previous post, we provisioned a guest on KVM. That only clones the guest from the template; there is no automatic configuration of the guest, such as the hostname, name servers, or IP configuration, which needs to be automated as well.

In order to automate the configuration of the above settings at boot time, we are going to use cloud-init.

What is cloud-init?

Cloud-init is the service that is installed inside the instance, and cloud-config is a set of scripts that are executed as soon as the instance is started. Cloud-config is the language of the scripts that cloud-init knows how to execute. Cloud-init runs on Linux workloads; for Microsoft Windows workloads, the equivalent is Cloudbase-Init, which supports the majority of cloud-config parameters. The service on the operating system starts early at boot and retrieves metadata that has been provided by an external provider (metadata) or direct user data supplied by the user. Reference:

cloud-init is run only the first time that the machine is booted. If cloud-init fails because of syntax errors in the file or doesn’t contain all of the needed directives, such as user credentials, a new instance must be created and launched. Restarting the failed instance with a new cloud-init file will not work. Reference

Let's start by creating the folder structure.

libvirt-cinit
├── cloud_init.cfg
├── libvirt-cinit.tf
├── network_config.cfg
└── terraform.d
    └── plugins
        └── linux_amd64
            └── terraform-provider-libvirt

#libvirt-cinit.tf

provider "libvirt" {
    uri = "qemu+ssh://tesla@oregon.anatolia.io/system"
}

resource "libvirt_volume" "centos71-test" {
  name = "centos71-test"
  format = "qcow2"
  pool = "KVMGuests"
  #qcow2 will be cloud-init compatible qcow2 disk image
  #https://cloud.centos.org/centos/7/images/
  source =  "http://oregon.anatolia.io/qcow2-repo/centos71-cloud.qcow2"
}


data "template_file" "user_data" {
 template = file("${path.module}/cloud_init.cfg")
}

data "template_file" "meta_data" {
 template = file("${path.module}/network_config.cfg")
}

resource "libvirt_cloudinit_disk" "cloudinit" {
  name           = "cloudinit.iso"  
  user_data      = data.template_file.user_data.rendered
  meta_data      = data.template_file.meta_data.rendered
  pool           = "KVMGuests"
}


resource "libvirt_domain" "centos71-test" {
 autostart = "true"
  name   = "centos71-test"
  memory = "2048"
  vcpu   = 2
  running = "true"
  cloudinit = libvirt_cloudinit_disk.cloudinit.id

  network_interface {
      network_name = "default"
  }

 disk {
       #scsi = "true"
       volume_id = libvirt_volume.centos71-test.id
  }
console {
    type        = "pty"
    target_type = "virtio"
    target_port = "1"
  }

}

cloud_init.cfg

It is a YAML file (AKA user-data) that contains all guest-related settings, such as the hostname, users, SSH keys, etc. This file is injected into the cloud-init ISO.

It MUST start with the line #cloud-config.

#cloud-config
hostname: centos71-test
fqdn: centos71-test.anatolia.io
manage_etc_hosts: True
users:
  - name: gokay
    sudo: ALL=(ALL) NOPASSWD:ALL
    gecos: Gokay IT user
    lock_passwd: False
    ssh_pwauth: True
    chpasswd: { expire: False }
    passwd: $6$kOh5qSjBzbBfw9Rz$Y5bxvHvA637lSmancdrc17072tdVTpuk8hJ9CX4GV8pvvZXQ/Bv3y8ljY9KjJtLPg6hsyrqe4OHdvAlzFKae/0 
    shell: /bin/bash
  - name: eda
    gecos: Eda SEC user
    lock_passwd: False
    ssh_pwauth: True
    chpasswd: { expire: False }
    passwd: $6$llELUDypCmO.oC9Q$QjykXeZQxJ7hxndJaIQMvxewG3Mb05zHn5NlA8Nf17wd5FXLr6W3e3V.bhHVNmVL3nBGirGPy66FrEV88cI2Q0 
    shell: /bin/bash
runcmd:
  - ifdown eth0
  - sleep 1
  - ifup eth0
#passwords
#gokay:gokay123i
#eda:edaa

network_config.cfg (Static IP)

This is another configuration file, used for the network configuration. On Red Hat GNU/Linux, netplan is not available, so we are going to use the format below. One thing that is very important: according to Red Hat, this configuration must be put into the meta-data, because of a bug in cloud-init.

#for virtio nic ethX
#for en100X nic ensX
network-interfaces: |
  auto eth0
  iface eth0 inet static
  address 192.168.122.102
  network 192.168.122.0
  netmask 255.255.255.0
  broadcast 192.168.122.255
  gateway 192.168.122.1
  dns-nameservers 192.168.122.1 8.8.8.8
#bootcmd:
#  - ifdown eth0
#  - ifup eth0

How do I set up a static networking configuration?

Add a network-interfaces section to the meta-data file. This section contains the usual set of networking configuration options.

Because of a current bug in cloud-init, static networking configurations are not automatically started. Instead the default DHCP configuration remains active. A suggested work around is to manually stop and restart the network interface via the bootcmd directive. Reference

As we are using Terraform, it will generate the ISO media by injecting the user-data and meta-data. If you need to do it manually, you can also use the command below.

sudo cloud-localds -v config.iso config.yaml network.cfg

The sample command above creates config.iso by injecting config.yaml and network.cfg.

You need to install the cloud-utils package in order to use cloud-localds.
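
On a Debian/Ubuntu host, for instance, it can be installed like this (on recent Ubuntu releases cloud-localds ships in cloud-image-utils instead):

sudo apt-get install cloud-utils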

Experiments:

The Jenkinsfile is configured to be pulled from the Git repository and is the same as the previous Jenkinsfile. I could not figure out how to pull the binary file from Git, so I used a fixed path for the Terraform provider, which is a binary file.

pipeline{
    agent {label 'master'}
    stages{
        stage('TF Init'){
            steps{
                sh '''
                cd /data/projects/terraform/libvirt-cinit
                terraform init
                '''
            }
        }
        stage('TF Plan'){
            steps{
                sh '''
                cd /data/projects/terraform/libvirt-cinit
                terraform plan -out deploy
                '''
            }
        }

        stage('Approval') {
            steps {
                script {
                    def userInput = input(id: 'Deploy', message: 'Do you want to deploy?', parameters: [ [$class: 'BooleanParameterDefinition', defaultValue: false, description: 'Apply terraform', name: 'confirm'] ])
                }
           }
        }
        stage('TF Apply'){
            steps{
                sh '''
                cd /data/projects/terraform/libvirt-cinit
                terraform apply deploy
                '''
            }
        }
    }
}

As you can see, we successfully set the IP configuration of the virtual guest.

KVM provisioning with Jenkins and Terraform

In this CI/CD exercise, we are going to provision a virtual guest on a KVM host using Terraform and Jenkins.

Officially, Terraform does not have a provider for KVM, but we are going to use a third-party provider for this.

I assume that the host that runs Terraform is able to connect to the KVM hypervisor over SSH without password authentication. To do so, we have already configured the KVM hypervisor for login with an SSH private key.

terraform-provider-libvirt is a compiled binary, which needs to be put into the <PROJECT FOLDER>/terraform.d/plugins/linux_amd64 folder. You can find the compiled releases here, or you can compile it yourself from source. We are going to create the following folder structure for Terraform (one way to prepare it is shown after the tree).

libvirt/
├── libvirt.tf
└── terraform.d
    └── plugins
        └── linux_amd64
            └── terraform-provider-libvirt
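
One way to prepare this layout with a few shell commands (the download location of the provider binary is an assumption):

mkdir -p libvirt/terraform.d/plugins/linux_amd64
# copy the downloaded provider binary into place and make it executable
cp ~/Downloads/terraform-provider-libvirt libvirt/terraform.d/plugins/linux_amd64/
chmod +x libvirt/terraform.d/plugins/linux_amd64/terraform-provider-libvirt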

#libvirt.tf
provider "libvirt" {
    uri = "qemu+ssh://tesla@oregon.anatolia.io/system"
}

resource "libvirt_volume" "centos7-test" {
  name = "centos7-test"
  format = "qcow2"
  pool = "KVMGuests"
  source =  "http://oregon.anatolia.io/qcow2-repo/centos71.qcow2"

}

resource "libvirt_domain" "centos7-test" {
 autostart = "true"
  name   = "centos7-test"
  memory = "2048"
  vcpu   = 2
  running = "true"
  network_interface{
      hostname = "centos7-test"
      network_name = "default"
  }

 disk {
       volume_id = "${libvirt_volume.centos7-test.id}"
  }
console {
    type        = "pty"
    target_type = "virtio"
    target_port = "1"
  }

}

Creating a Pipeline in Jenkins

In this section, the pipeline script that needs to be defined in the Jenkins pipeline is shown.

pipeline{
    agent {label 'master'}
    stages{
        stage('TF Init'){
            steps{
                sh '''
                cd /data/projects/terraform/libvirt
                terraform init
                '''
            }
        }
        stage('TF Plan'){
            steps{
                sh '''
                cd /data/projects/terraform/libvirt
                terraform plan -out createkvm
                '''
            }
        }

        stage('Approval') {
            steps {
                script {
                    def userInput = input(id: 'Approve', message: 'Do you want to Approve?', parameters: [ [$class: 'BooleanParameterDefinition', defaultValue: false, description: 'Apply terraform', name: 'confirm'] ])
                }
           }
        }
        stage('TF Apply'){
            steps{
                sh '''
                cd /data/projects/terraform/libvirt
                terraform apply createkvm
                '''
            }
        }
    }
}




You may notice that the domain name is centos7-test but the hostname is centos71. This is because I used one of the templates that I had created before; the address of the template is defined in the source section of the libvirt.tf file. In the next post, I will integrate this with cloud-init, which allows the machine to be set up at first boot. By doing that, even the machine customization will be done automatically.