KVM provisioning with Jenkins and Terraform (Cloud-init)

In the previous post, we provisioned a guest on KVM, but that was just provisioning the guest from a template… There was no automatic configuration of the guest, such as the hostname, name servers, and IP settings; that needs to be automated as well.

In order to automate the configuration of these settings at boot time, we are going to use cloud-init.

What is cloud-init?

Cloud-init is a service installed inside the instance, and cloud-config is a set of directives executed as soon as the instance starts; cloud-config is the language of the scripts that cloud-init knows how to execute. Cloud-init runs on Linux workloads; for Microsoft Windows workloads the equivalent is Cloudbase-Init, which supports the majority of cloud-config parameters. The service starts early in the boot process and retrieves metadata provided by an external provider (a metadata service) or user data supplied directly by the user. Reference

Cloud-init runs only the first time the machine boots. If cloud-init fails because of syntax errors in the file, or the file does not contain all of the needed directives, such as user credentials, a new instance must be created and launched; restarting the failed instance with a new cloud-init file will not work. Reference
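
To check whether cloud-init has already run on a booted guest, reasonably recent versions ship a status subcommand and write detailed logs under /var/log:

# inside the guest: show whether cloud-init finished and with what result
cloud-init status --long
# per-boot-stage details
less /var/log/cloud-init.log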

Let's start by creating the folder structure.

libvirt-cinit
├── cloud_init.cfg
├── libvirt-cinit.tf
├── network_config.cfg
└── terraform.d
    └── plugins
        └── linux_amd64
            └── terraform-provider-libvirt

#libvirt-cinit.tf

provider "libvirt" {
    uri = "qemu+ssh://tesla@oregon.anatolia.io/system"
}

resource "libvirt_volume" "centos71-test" {
  name = "centos71-test"
  format = "qcow2"
  pool = "KVMGuests"
  #qcow2 will be cloud-init compatible qcow2 disk image
  #https://cloud.centos.org/centos/7/images/
  source =  "http://oregon.anatolia.io/qcow2-repo/centos71-cloud.qcow2"
}


data "template_file" "user_data" {
 template = file("${path.module}/cloud_init.cfg")
}

data "template_file" "meta_data" {
 template = file("${path.module}/network_config.cfg")
}

resource "libvirt_cloudinit_disk" "cloudinit" {
  name           = "cloudinit.iso"  
  user_data      = data.template_file.user_data.rendered
  meta_data      = data.template_file.meta_data.rendered
  pool           = "KVMGuests"
}


resource "libvirt_domain" "centos71-test" {
 autostart = "true"
  name   = "centos71-test"
  memory = "2048"
  vcpu   = 2
  running = "true"
  cloudinit = libvirt_cloudinit_disk.cloudinit.id

  network_interface {
      network_name = "default"
  }

 disk {
       #scsi = "true"
       volume_id = libvirt_volume.centos71-test.id
  }
console {
    type        = "pty"
    target_type = "virtio"
    target_port = "1"
  }

}
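
A side note: the template_file data source comes from the separate (now deprecated) template provider. Since neither file references any template variables, on Terraform 0.12+ the built-in templatefile() function can be used instead; a minimal sketch of the same cloudinit disk:

resource "libvirt_cloudinit_disk" "cloudinit" {
  name      = "cloudinit.iso"
  user_data = templatefile("${path.module}/cloud_init.cfg", {})
  meta_data = templatefile("${path.module}/network_config.cfg", {})
  pool      = "KVMGuests"
}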

cloud_init.cfg

This is a YAML file (AKA user-data) that contains all guest-related settings, such as the hostname, users, and SSH keys. This file is injected into the cloud-init ISO.

It MUST start with the line #cloud-config.

#cloud-config
hostname: centos71-test
fqdn: centos71-test.anatolia.io
manage_etc_hosts: true
# ssh_pwauth and chpasswd are top-level directives, not per-user keys
ssh_pwauth: true
chpasswd: { expire: false }
users:
  - name: gokay
    sudo: ALL=(ALL) NOPASSWD:ALL
    gecos: Gokay IT user
    lock_passwd: false
    passwd: $6$kOh5qSjBzbBfw9Rz$Y5bxvHvA637lSmancdrc17072tdVTpuk8hJ9CX4GV8pvvZXQ/Bv3y8ljY9KjJtLPg6hsyrqe4OHdvAlzFKae/0
    shell: /bin/bash
  - name: eda
    gecos: Eda SEC user
    lock_passwd: false
    passwd: $6$llELUDypCmO.oC9Q$QjykXeZQxJ7hxndJaIQMvxewG3Mb05zHn5NlA8Nf17wd5FXLr6W3e3V.bhHVNmVL3nBGirGPy66FrEV88cI2Q0
    shell: /bin/bash
runcmd:
  - ifdown eth0
  - sleep 1
  - ifup eth0
#passwords
#gokay:gokay123i
#eda:edaa
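
The passwd values above are SHA-512 crypt hashes, not plain-text passwords (the plain-text equivalents are kept in the trailing comments). One way to generate such a hash, assuming OpenSSL 1.1.1+ or the mkpasswd tool from the whois package is available:

# either command prints a $6$... SHA-512 crypt hash suitable for the passwd field
openssl passwd -6 'gokay123i'
mkpasswd --method=sha-512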

network_config.cfg (Static IP)

This is a separate file for the network configuration. Red Hat GNU/Linux does not use netplan, so we are going to use the legacy format below. One very important point: according to Red Hat, this configuration must be put into the meta-data, because of a bug in cloud-init.

#for a virtio NIC the guest interface is ethX
#for an e1000 NIC the guest interface is ensX
network-interfaces: |
  auto eth0
  iface eth0 inet static
  address 192.168.122.102
  network 192.168.122.0
  netmask 255.255.255.0
  broadcast 192.168.122.255
  gateway 192.168.122.1
  dns-nameservers 192.168.122.1 8.8.8.8
#bootcmd:
#  - ifdown eth0
#  - ifup eth0
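
After the first boot, the result can be verified inside the guest; on CentOS 7, cloud-init should render this meta-data into a classic ifcfg file:

# check the address that was actually assigned
ip addr show eth0
# the interface file generated by cloud-init (CentOS/RHEL path)
cat /etc/sysconfig/network-scripts/ifcfg-eth0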

How do I set up a static networking configuration?

Add a network-interfaces section to the meta-data file. This section contains the usual set of networking configuration options.

Because of a current bug in cloud-init, static networking configurations are not automatically started. Instead the default DHCP configuration remains active. A suggested work around is to manually stop and restart the network interface via the bootcmd directive. Reference

Since we are using Terraform, it will generate the ISO media by injecting the user-data and meta-data. If you need to do it manually, you can also use the command below.

sudo cloud-localds -v config.iso config.yaml network.cfg

The sample command above creates config.iso by injecting config.yaml (as user-data) and network.cfg (as meta-data).

You need to install the cloud-utils package in order to use cloud-localds.
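
On Debian/Ubuntu that looks roughly like the following; note that on newer releases cloud-localds ships in the cloud-image-utils package:

$ sudo apt install cloud-utils cloud-image-utils
# optionally verify the generated media; it should contain user-data and meta-data
$ sudo mount -o loop config.iso /mnt && ls /mnt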

Experiments:

The Jenkinsfile is configured to be pulled from the git repository and is the same as the previous Jenkinsfile. I could not figure out how to pull the provider binary from git; because of that, I used a fixed path for the Terraform provider, which is a binary file.

pipeline{
    agent {label 'master'}
    stages{
        stage('TF Init'){
            steps{
                sh '''
                cd /data/projects/terraform/libvirt-cinit
                terraform init
                '''
            }
        }
        stage('TF Plan'){
            steps{
                sh '''
                cd /data/projects/terraform/libvirt-cinit
                terraform plan -out deploy
                '''
            }
        }

        stage('Approval') {
            steps {
                script {
                    def userInput = input(id: 'Deploy', message: 'Do you want to deploy?', parameters: [[$class: 'BooleanParameterDefinition', defaultValue: false, description: 'Apply terraform', name: 'confirm']])
                }
            }
        }
        stage('TF Apply'){
            steps{
                sh '''
                cd /data/projects/terraform/libvirt-cinit
                terraform apply deploy
                '''
            }
        }
    }
}

As you can see, we successfully set the IP configuration of the virtual guest.
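
A quick way to confirm this from the outside, using the static address and the gokay account defined in the cloud-init files above:

# the static address from network_config.cfg should answer
ping -c 2 192.168.122.102
# logging in and printing the FQDN should return centos71-test.anatolia.io
ssh gokay@192.168.122.102 hostname -f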

KVM provisioning with Jenkins and Terraform

In this CI/CD activity, we are going to provision a virtual guest on a KVM host using Terraform and Jenkins.

Officially, Terraform does not have a provider for KVM, so we are going to use a third-party provider.

I assume that the host that runs Terraform is able to connect to the KVM hypervisor over SSH without password authentication. To that end, we already configured the KVM hypervisor for login with an SSH private key.
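
If that is not set up yet, a minimal sketch (using the tesla@oregon.anatolia.io host from the provider URI below):

# generate a key pair on the host that runs Terraform, if one does not exist yet
ssh-keygen -t ed25519
# install the public key on the KVM hypervisor
ssh-copy-id tesla@oregon.anatolia.io
# verify passwordless login and libvirt access in one step
ssh tesla@oregon.anatolia.io virsh list --all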

terraform-provider-libvirt is a compiled binary, which needs to be put into the <PROJECT FOLDER>/terraform.d/plugins/linux_amd64 folder. You can find the compiled releases here, or you can compile it yourself from source. We are going to create the following folder structure for Terraform.

libvirt/
├── libvirt.tf
└── terraform.d
    └── plugins
        └── linux_amd64
            └── terraform-provider-libvirt
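
This local plugin directory layout is what Terraform 0.12 and earlier expect. As a side note, on Terraform 0.13+ the same provider can be pulled from the public registry instead of a local binary, with a required_providers block along these lines:

terraform {
  required_providers {
    libvirt = {
      source = "dmacvicar/libvirt"
    }
  }
}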

#libvirt.tf
provider "libvirt" {
    uri = "qemu+ssh://tesla@oregon.anatolia.io/system"
}

resource "libvirt_volume" "centos7-test" {
  name = "centos7-test"
  format = "qcow2"
  pool = "KVMGuests"
  source =  "http://oregon.anatolia.io/qcow2-repo/centos71.qcow2"

}

resource "libvirt_domain" "centos7-test" {
 autostart = "true"
  name   = "centos7-test"
  memory = "2048"
  vcpu   = 2
  running = "true"
  network_interface{
      hostname = "centos7-test"
      network_name = "default"
  }

 disk {
       volume_id = "${libvirt_volume.centos7-test.id}"
  }
console {
    type        = "pty"
    target_type = "virtio"
    target_port = "1"
  }

}

Creating a Pipeline in Jenkins

In this section, we create the pipeline script that needs to be defined in the Jenkins Pipeline.

pipeline{
    agent {label 'master'}
    stages{
        stage('TF Init'){
            steps{
                sh '''
                cd /data/projects/terraform/libvirt
                terraform init
                '''
            }
        }
        stage('TF Plan'){
            steps{
                sh '''
                cd /data/projects/terraform/libvirt
                terraform plan -out createkvm
                '''
            }
        }

        stage('Approval') {
            steps {
                script {
                    def userInput = input(id: 'Approve', message: 'Do you want to Approve?', parameters: [[$class: 'BooleanParameterDefinition', defaultValue: false, description: 'Apply terraform', name: 'confirm']])
                }
            }
        }
        stage('TF Apply'){
            steps{
                sh '''
                cd /data/projects/terraform/libvirt
                terraform apply createkvm
                '''
            }
        }
    }
}




You may notice that the domain name is centos7-test but the hostname is centos71. This is because I used one of the templates that I had created before; the address of the template is defined in the source section of the libvirt.tf file. In the next post, I will integrate it with cloud-init, which allows the machine to be set up at first boot. By doing that, even machine customization will be done automatically.

Creating VLANs on KVM with OpenVswitch

VLAN is a crucial L2 network technology for segmenting broadcast domains, which in the end gives you better network utilization and security. If you are familiar with VMware technology, you can create a port group on a dVS or standard switch. But if you need to segregate your network on a KVM hypervisor, you need some other packages. In this tutorial I will show you how to create VLANs using Open vSwitch and integrate them with KVM.

For this post, I assume that you already have Open vSwitch installed on your system. If not, follow here. I am also assuming that you have a physical NIC to bridge to your virtual bridge (switch) created via Open vSwitch; that is what connects you to the outside world.
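
If Open vSwitch is missing, on Debian/Ubuntu-based distributions the installation is typically a single package (package names differ on other distributions):

$ sudo apt install openvswitch-switch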

tesla@ankara:~$ sudo ovs-vsctl -V
ovs-vsctl (Open vSwitch) 2.12.0
DB Schema 8.0.0

Creating a Virtual Bridge with Open vSwitch

$ sudo ovs-vsctl add-br OVS0 

Adding the Physical NIC to the OVS0 Bridge

$ sudo ovs-vsctl add-port OVS0 enp0s31f6
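
At this point the bridge and its uplink can be verified; the output should list bridge OVS0 with both the internal OVS0 port and enp0s31f6 attached:

$ sudo ovs-vsctl show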

In order to integrate the bridge created by Open vSwitch with KVM, we need to create an XML configuration file that will be defined in libvirt. You can see my configuration below.

<network>
  <name>OVS0</name>
  <forward mode='bridge'/>
  <bridge name='OVS0'/>
  <virtualport type='openvswitch'/>
  <portgroup name='VLAN10'>
    <vlan>
      <tag id='10'/>
    </vlan>
  </portgroup>
  <portgroup name='VLAN20'>
    <vlan>
      <tag id='20'/>
    </vlan>
  </portgroup>
  <portgroup name='VLAN30'>
    <vlan>
      <tag id='30'/>
    </vlan>
  </portgroup>
  <portgroup name='VLAN40'>
    <vlan>
      <tag id='40'/>
    </vlan>
  </portgroup>
  <portgroup name='VLAN99'>
    <vlan>
      <tag id='99'/>
    </vlan>
  </portgroup>
  <portgroup name='VLAN100'>
    <vlan>
      <tag id='100'/>
    </vlan>
  </portgroup>
  <portgroup name='TRUNK'>
    <vlan trunk='yes'>
      <tag id='10'/>
      <tag id='20'/>
      <tag id='30'/>
      <tag id='40'/>
      <tag id='99'/>
      <tag id='100'/>
    </vlan>
  </portgroup>
</network>

As per the XML configuration above, we are creating port groups for VLAN IDs 10, 20, 30, 40, 99, and 100, plus a TRUNK port group that carries all of them.

Defining the configuration with virsh

virsh # net-define --file OVS0.xml 
Network OVS0 defined from OVS0.xml
virsh # net-autostart --network OVS0
Network OVS0 marked as autostarted
virsh # net-list 
 Name      State    Autostart   Persistent
--------------------------------------------
 default   active   yes         yes
 OVS0      active   yes         yes

After defining it, you will see that your XML file has been modified by libvirt, which adds a uuid.

<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
  virsh net-edit OVS0
or other application using the libvirt API.
-->

<network>
  <name>OVS0</name>
  <uuid>a38bdd43-7fba-4e23-98f1-8c0ab83cff2c</uuid>
  <forward mode='bridge'/>
  <bridge name='OVS0'/>
  <virtualport type='openvswitch'/>
  <portgroup name='VLAN10'>
    <vlan>
      <tag id='10'/>
    </vlan>
  </portgroup>
  <portgroup name='VLAN20'>
    <vlan>
      <tag id='20'/>
    </vlan>
  </portgroup>
  <portgroup name='VLAN30'>
    <vlan>
      <tag id='30'/>
    </vlan>
  </portgroup>
  <portgroup name='VLAN40'>
    <vlan>
      <tag id='40'/>
    </vlan>
  </portgroup>
  <portgroup name='VLAN99'>
    <vlan>
      <tag id='99'/>
    </vlan>
  </portgroup>
  <portgroup name='VLAN100'>
    <vlan>
      <tag id='100'/>
    </vlan>
  </portgroup>
  <portgroup name='TRUNK'>
    <vlan trunk='yes'>
      <tag id='10'/>
      <tag id='20'/>
      <tag id='30'/>
      <tag id='40'/>
      <tag id='99'/>
      <tag id='100'/>
    </vlan>
  </portgroup>
</network>
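
With the network defined, a guest NIC can be attached to one of the port groups. In the guest's domain XML (virsh edit <guest>), the interface definition looks like the snippet below, where portgroup is one of the names defined above:

<interface type='network'>
  <source network='OVS0' portgroup='VLAN10'/>
  <model type='virtio'/>
</interface>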

Experiments

Let's check in virt-manager whether we can see the port groups.

Capturing Packets with Wireshark on the Physical NIC connected to OVS0
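
From the hypervisor, the same tagged traffic can also be inspected on the command line; the -e flag makes tcpdump print the link-level headers, so the 802.1Q VLAN tags become visible:

$ sudo tcpdump -i enp0s31f6 -e vlan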

Compiling Archer T600U Plus WiFi USB Adapter on GNU/Linux with dkms

In this very short post, I am going to show you how to compile the TP-Link USB adapter module on GNU/Linux with dkms. I am using Pop!_OS 19.10.

$ sudo apt install git dkms
$ git clone https://github.com/aircrack-ng/rtl8812au.git
$ cd rtl8812au
$ sudo ./dkms-install.sh

If everything goes well, you should get output similar to the one below.

About to run dkms install steps...

Creating symlink /var/lib/dkms/rtl8812au/5.6.4.2/source ->
                 /usr/src/rtl8812au-5.6.4.2

DKMS: add completed.

Kernel preparation unnecessary for this kernel.  Skipping...

Building module:
cleaning build area...
'make' -j8 KVER=5.3.0-20-generic KSRC=/lib/modules/5.3.0-20-generic/build........
cleaning build area...

DKMS: build completed.

88XXau.ko:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/5.3.0-20-generic/updates/dkms/

depmod....

DKMS: install completed.
Finished running dkms install steps.
$ ip link show
...
3: wlx34e894b147cc: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 2312 qdisc mq state UP mode DORMANT group default qlen 1000