Run VMware PowerCLI in a Docker Container

If you have a vSphere environment in your infrastructure and want to automate provisioning steps, you most likely need PowerCLI, which normally requires a Windows guest or a supported GNU/Linux guest in your environment. Terraform is also a very nice tool for provisioning new virtual machines, but it is quite limited for this.

If you have stumbled upon the same case as above, this post is for you. Actually, you do not need any GNU/Linux or Windows guest, just Docker. If you have a Docker environment, you can pull the vmware/powerclicore image, run it in your Docker environment, and connect to your vCenter server from inside that container.

I used it to set the boot priority of virtual guests. One of the good things about Docker is that you can pass environment variables into the container and use them there.

docker run --rm -it --entrypoint="/usr/bin/pwsh" \
   -e VI_SERVER=${TF_VAR_vsphere_server} \
   -e VI_USERNAME=${TF_VAR_vsphere_user} \
   -v $(pwd)/scripts:/tmp/scripts vmware/powerclicore /tmp/scripts/setboot-priority.ps1

What the shell command above does: it instantiates the Docker image vmware/powerclicore, mounts the $(pwd)/scripts folder on the host (where config.txt and setboot-priority.ps1 live) to /tmp/scripts in the container, and runs setboot-priority.ps1 inside the container.
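Since the variables are exported into the container, the PowerShell side can read them via $env:. As a quick interactive sanity check, something like the sketch below can be used (this is an illustration, not the script from the repo; Connect-VIServer will prompt for the vCenter password, since none is passed in):

docker run --rm -it --entrypoint="/usr/bin/pwsh" \
   -e VI_SERVER=${TF_VAR_vsphere_server} \
   -e VI_USERNAME=${TF_VAR_vsphere_user} \
   vmware/powerclicore \
   -Command 'Connect-VIServer -Server $env:VI_SERVER -User $env:VI_USERNAME; Get-VM | Select-Object Name, PowerState'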

You can get the setboot-priority.ps1 script from my GitHub repo.

Creating an Image for MaaS with Packer

In this post we are going to build an image with Packer, which will then be deployed via MaaS. After the image is built and uploaded to MaaS, it can be used to provision virtual machines or deploy an OS on bare-metal machines. In order to build an image that is deployable with MaaS, we need a couple of files, which you can clone here.

For this post, a minimal CentOS 7 image will be created, including the httpd package for testing. One of the cool things about Packer is that you can also execute an Ansible playbook inside the machine being provisioned by Packer. The Ansible (remote) provisioner is used to configure the NTP server and install httpd.

As QEMU is used as the builder, qemu-system-x86_64 has to be installed on the host where Packer runs. Packer will create a qcow2 image after a successful build, but we are going to use a tar.gz image file, as we deploy the image via MaaS. A quick way to get the prerequisites is sketched below.
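On a Debian/Ubuntu build host, the prerequisites can be installed roughly like this (package names are the usual ones; adjust for your distribution):

# QEMU/KVM for the builder, qemu-utils for the qemu-nbd tool used in the post-processor.
sudo apt-get install -y qemu-system-x86 qemu-utils
qemu-system-x86_64 --version   # sanity check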

centos7.json (do not forget to change iso_url in accordance with your environment.)

{
    "builders": [
        {
            "type": "qemu",
            "iso_url": "/home/tesla/packer/centos7/isos/CentOS-7-x86_64-NetInstall-2003.iso",
            "iso_checksum_type": "sha256",
            "iso_checksum": "101bc813d2af9ccf534d112cbe8670e6d900425b297d1a4d2529c5ad5f226372",
            "boot_command": [
                "<tab> ",
                "inst.ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/centos7.ks ",
                "<enter>"
            ],
            "ssh_username": "tesla",
            "ssh_password": "tesla",
            "ssh_wait_timeout": "12000s",
            "boot_wait": "3s",
            "disk_size": "4G",
            "display": "none",
            "headless": false,
            "memory": 4096,
            "accelerator": "kvm",
            "cpus": 4,
            "http_directory": "http",
            "shutdown_timeout": "20m",
            "disk_interface": "virtio",
            "format": "qcow2",
            "net_device": "virtio-net"
        }
    ],
    "post-processors": [
        {
            "type": "shell-local",
            "inline_shebang": "/bin/bash -e",
            "inline": [
                "TMP_DIR=$(mktemp -d /tmp/packer-maas-XXXX)",
                "echo 'Mounting image...'",
                "modprobe nbd",
                "qemu-nbd -d /dev/nbd4",
                "qemu-nbd -c /dev/nbd4 -n output-qemu/packer-qemu",
                "echo 'Waiting for partitions to be created...'",
                "tries=0",
                "while [ ! -e /dev/nbd4p1 -a $tries -lt 60 ]; do",
                "    sleep 1",
                "    tries=$((tries+1))",
                "done",
                "echo 'Tarring up image...'",
                "mount /dev/nbd4p1 $TMP_DIR",
                "tar -Sczpf centos7.tar.gz --selinux -C $TMP_DIR .",
                "echo 'Unmounting image...'",
                "umount $TMP_DIR",
                "qemu-nbd -d /dev/nbd4",
                "rmdir $TMP_DIR"
            ]
        }
    ],
    "provisioners": [
        {
            "type": "shell",
            "pause_before": "5s",
            "inline": [
                "sudo yum -y install epel-release",
                "sudo yum -y update",
                "sudo yum -y remove cloud-init",
                "sudo yum -y install python-jsonschema python-devel",
                "sudo yum -y install cloud-init --disablerepo=* --enablerepo=group_cloud-init-el-stable",
                "sudo yum -y install qemu-guest-agent wget"
            ]
        },
        {
            "user": "tesla",
            "type": "ansible",
            "playbook_file": "./ansible/main.yml"
        },
        {
            "type": "shell",
            "inline": [
                "sudo systemctl enable cloud-init",
                "sudo rm -rf /var/lib/cloud/",
                "sudo rm -rf /etc/cloud/cloud-init.disabled",
                "sudo /usr/bin/truncate -s 0 /etc/fstab",
                "sudo /usr/bin/truncate -s 0 /etc/resolv.conf",
                "sudo rm -f /etc/sysconfig/network-scripts/ifcfg-[^lo]*",
                "sudo sync"
            ]
        }
    ]
}

#main.yml
---
- hosts: default
  become: yes
  roles:
    - configure_httpd
    - configure_chrony

centos7.ks

url --mirrorlist="http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=os"
text
reboot
firewall --enabled --service=ssh,http
firstboot --disable
ignoredisk --only-use=vda
lang en_US.UTF-8
keyboard us
network --bootproto=dhcp --hostname=packer-maas.manintheit.org
selinux --enforcing
timezone UTC --isUtc
bootloader --location=mbr --driveorder="vda" --timeout=1
rootpw --plaintext root1234
user --name=tesla --groups=wheel --plaintext --password=tesla


repo --name="Base" --mirrorlist="http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=os"
repo --name="Updates" --mirrorlist="http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=updates"
repo --name="Extras" --mirrorlist="http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=extras"
repo --name="cloud-init" --baseurl="http://copr-be.cloud.fedoraproject.org/results/@cloud-init/el-stable/epel-7-x86_64"

zerombr
clearpart --all --initlabel
part / --size=1 --grow --asprimary --fstype=ext4


%packages
@core
#httpd
bash-completion
cloud-init-el-release
cloud-init
# cloud-init only requires python-oauthlib with MAAS. As such upstream
# has removed python-oauthlib from cloud-init's deps.
python2-oauthlib
cloud-utils-growpart
rsync
tar
yum-utils
# bridge-utils is required by cloud-init to configure networking. Without it
# installed cloud-init will try to install it itself which will not work in
# isolated environments.
bridge-utils
# Tools needed to allow custom storage to be deployed without accessing the
# Internet.
grub2-efi-x64
shim-x64
# Older versions of Curtin do not support secure boot and setup grub by
# generating grubx64.efi with grub2-efi-x64-modules.
grub2-efi-x64-modules
efibootmgr
dosfstools
lvm2
mdadm
device-mapper-multipath
iscsi-initiator-utils
-plymouth
# Remove ALSA firmware
-a*-firmware
# Remove Intel wireless firmware
-i*-firmware
%end


%post --erroronfail
systemctl disable cloud-init
touch /etc/cloud/cloud-init.disabled
yum install -y sudo
echo "tesla        ALL=(ALL)       NOPASSWD: ALL" >> /etc/sudoers.d/tesla
sed -i "s/^.*requiretty/#Defaults requiretty/" /etc/sudoers
yum clean all
%end

Validating Packer config.

sudo PACKER_LOG=1 packer validate centos7.json

After successful validation, we can start building machine image.

sudo PACKER_LOG=1 packer build centos7.json

Once Packer has finished successfully, a centos7.tar.gz file should be created; this is what will be uploaded to MaaS to provision VMs or install an OS on bare-metal servers.
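Before uploading, it does not hurt to peek inside the archive; it should contain a root filesystem tree (etc/, usr/, and so on), not a disk image:

tar -tzf centos7.tar.gz | head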

Uploading image to MaaS.

maas $PROFILE boot-resources create name="centos/centos7Packer1" architecture=amd64/generic content@=centos7.tar.gz
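The command above assumes a MaaS CLI profile that is already logged in. If you have not done that yet, something along these lines is needed first (the server URL and key are placeholders; the API key can be copied from your user preferences in the MaaS web UI):

maas login $PROFILE http://<maas-server>:5240/MAAS/ $API_KEY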

Important Takeaways on Packer.

Do not forget to issue the sync command inside the machine provisioned by Packer. Otherwise, you may end up with empty systemd unit files: the sync command flushes data from the cache to disk.

MaaS uses cloud-init to set up various host settings, such as disk partitioning, DNS servers, and network configuration. Some files are therefore removed or truncated prior to image creation, because MaaS will populate these configurations during the first boot of the system. If you deploy the OS via MaaS and configure the DNS settings in MaaS, they will be overwritten by cloud-init anyway, even if you configure resolv.conf inside the image.

Provisioning a VM on KVM via Kickstart using virt-install

virt-install is a command-line tool for creating new KVM, Xen, or Linux container guests using the libvirt hypervisor management library. It is one of the quickest ways to deploy a VM from the command line. In this post I will also show you how to install CentOS on KVM via kickstart. For this installation, instead of the native GNU/Linux bridge, we are using Open vSwitch.

I am assuming that you have already configured your DHCP, DNS, and HTTP server environment for PXE boot. I am using Cobbler for DHCP server management. I am provisioning CentOS machines to be used as Kubernetes cluster nodes. As I use a remote KVM host, the user tesla has to be able to connect with SSH key authentication, and has to be in the libvirt group.
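Before running the provisioning script, it is worth confirming that the remote libvirt connection works without a password prompt. A minimal check, assuming virsh is installed locally:

# Should list the remote KVM host's guests without asking for a password.
virsh -c qemu+ssh://tesla@192.168.122.1/system list --all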

Provisioning script.

virt-install \
--connect qemu+ssh://tesla@192.168.122.1/system \
--name k8s-master \
--ram 2048 \
--disk bus=virtio,pool=KVMGuests,size=15,format=qcow2 \
--network network=OVS0,model=virtio,virtualport_type=openvswitch,portgroup=VLAN100 \
--vcpus 2 \
--os-type linux \
--location http://cobbler.manintheit.org/cblr/links/CentOS7-x86_64 \
--os-variant rhel7 \
--extra-args="ks=http://10.5.100.253/k8s/k8s-master-centos7.ks ksdevice=eth0 ip=10.5.100.15 netmask=255.255.255.0 dns=10.5.100.253 gateway=10.5.100.254" 

--location: distribution tree installation source. virt-install can recognize certain distribution trees and fetches a bootable kernel/initrd pair to launch the install.

k8s-master-centos7.ks

install
text
eula --agreed
url --url=http://10.5.100.253/cblr/links/CentOS7-x86_64/
lang en_US.UTF-8
keyboard us
network --onboot=on --bootproto=static  --ip 10.5.100.15 --netmask 255.255.255.0 --gateway 10.5.100.254 --nameserver 10.5.100.253 --device=eth0 --hostname k8s-master.manintheit.org
rootpw root
firewall --disabled
selinux --permissive
timezone Europe/Berlin
skipx
zerombr
clearpart --all --initlabel
part /boot --fstype ext4 --size=512
part /     --fstype ext4 --size=1 --grow
authconfig --enableshadow --passalgo=sha512
services --enabled=NetworkManager,sshd
reboot
user --name=tesla --plaintext --password tesla --groups=tesla,wheel

#repo --name=base --baseurl=http://mirror.centos.org/centos/7.3.1611/os/x86_64/
#repo --name=epel-release --baseurl=http://anorien.csc.warwick.ac.uk/mirrors/epel/7/x86_64/
#repo --name=elrepo-kernel --baseurl=http://elrepo.org/linux/kernel/el7/x86_64/
#repo --name=elrepo-release --baseurl=http://elrepo.org/linux/elrepo/el7/x86_64/
#repo --name=elrepo-extras --baseurl=http://elrepo.org/linux/extras/el7/x86_64/

%packages --ignoremissing --excludedocs
@Base
%end

%post
yum update -y
yum install -y sudo
echo "tesla        ALL=(ALL)       NOPASSWD: ALL" >> /etc/sudoers.d/tesla
sed -i "s/^.*requiretty/#Defaults requiretty/" /etc/sudoers
/bin/echo 'UseDNS no' >> /etc/ssh/sshd_config
yum clean all
/bin/sh -c 'echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf'
modprobe br_netfilter
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

#Enable kubernetes repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
%end

MaaS MAC filtering

Are you using MaaS for bare-metal server or VM deployments, and do you need MAC filtering so that MaaS ignores DHCP discovery packets coming from particular MAC address(es)? If so, you can add an isc-dhcp configuration snippet similar to the one below to your MaaS (Settings > DHCP Snippets) and set the scope to Global.

class "black-list" 
{    
	match substring (hardware, 1, 6);      
	ignore booting;
}
subclass "black-list" 00:10:9b:8f:31:71;
subclass "black-list" 00:10:9b:8f:31:78;

After adding, enabling, and saving it, MaaS will not send any PXE boot reply to systems with the MAC addresses above. For reference, the hardware field in isc-dhcp is the hardware type byte followed by the link-layer address, so substring (hardware, 1, 6) skips the type byte and extracts the six-byte MAC.
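To verify the snippet is working, you can watch DHCP traffic on the MaaS rack controller: discover packets from a blacklisted MAC should arrive but get no offer in response. A quick check with tcpdump (the interface name is an assumption):

sudo tcpdump -ni eth0 'ether host 00:10:9b:8f:31:71 and (port 67 or port 68)'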

Happy Deployment 🙂

KVM provisioning with Jenkins and Terraform(Cloud-init)

In the previous post, we provisioned a guest on KVM. That was just provisioning the guest from a template; there was no automatic configuration of the guest, such as hostname, name server, or IP configuration, which needs to be automated as well.

In order to automate the configuration of the above settings at boot time, we are going to use cloud-init.

What is cloud-init?

Cloud-init is the service that is installed inside the instance, and cloud-config is a set of scripts that are executed as soon as the instance is started. Cloud-config is the language of the scripts that cloud-init knows how to execute. Cloud-init runs on Linux workloads; for Microsoft Windows workloads, the equivalent is Cloudbase-Init, which supports the majority of cloud-config parameters. The service on the operating system starts early at boot and retrieves metadata that has been provided by an external provider (metadata) or direct user data supplied by the user. Reference:

cloud-init runs only the first time that the machine is booted. If cloud-init fails because of syntax errors in the file, or the file doesn't contain all of the needed directives, such as user credentials, a new instance must be created and launched. Restarting the failed instance with a new cloud-init file will not work. Reference

Let's start by creating the folder structure.

libvirt-cinit
├── cloud_init.cfg
├── libvirt-cinit.tf
├── network_config.cfg
└── terraform.d
    └── plugins
        └── linux_amd64
            └── terraform-provider-libvirt

#libvirt-cinit.tf

provider "libvirt" {
    uri = "qemu+ssh://tesla@oregon.anatolia.io/system"
}

resource "libvirt_volume" "centos71-test" {
  name = "centos71-test"
  format = "qcow2"
  pool = "KVMGuests"
  #qcow2 will be cloud-init compatible qcow2 disk image
  #https://cloud.centos.org/centos/7/images/
  source =  "http://oregon.anatolia.io/qcow2-repo/centos71-cloud.qcow2"
}


data "template_file" "user_data" {
 template = file("${path.module}/cloud_init.cfg")
}

data "template_file" "meta_data" {
 template = file("${path.module}/network_config.cfg")
}

resource "libvirt_cloudinit_disk" "cloudinit" {
  name           = "cloudinit.iso"  
  user_data      = data.template_file.user_data.rendered
  meta_data      = data.template_file.meta_data.rendered
  pool           = "KVMGuests"
}


resource "libvirt_domain" "centos71-test" {
 autostart = "true"
  name   = "centos71-test"
  memory = "2048"
  vcpu   = 2
  running = "true"
  cloudinit = libvirt_cloudinit_disk.cloudinit.id

  network_interface {
      network_name = "default"
  }

 disk {
       #scsi = "true"
       volume_id = libvirt_volume.centos71-test.id
  }
console {
    type        = "pty"
    target_type = "virtio"
    target_port = "1"
  }

}

cloud_init.cfg

It is a YAML file (AKA user-data) which contains all guest-related settings, such as hostname, users, SSH keys, etc. This file is injected into the cloud-init ISO.

It MUST start with the line #cloud-config.

#cloud-config
hostname: centos71-test
fqdn: centos71-test.anatolia.io
manage_etc_hosts: True
users:
  - name: gokay
    sudo: ALL=(ALL) NOPASSWD:ALL
    gecos: Gokay IT user
    lock_passwd: False
    ssh_pwauth: True
    chpasswd: { expire: False }
    passwd: $6$kOh5qSjBzbBfw9Rz$Y5bxvHvA637lSmancdrc17072tdVTpuk8hJ9CX4GV8pvvZXQ/Bv3y8ljY9KjJtLPg6hsyrqe4OHdvAlzFKae/0 
    shell: /bin/bash
  - name: eda
    gecos: Eda SEC user
    lock_passwd: False
    ssh_pwauth: True
    chpasswd: { expire: False }
    passwd: $6$llELUDypCmO.oC9Q$QjykXeZQxJ7hxndJaIQMvxewG3Mb05zHn5NlA8Nf17wd5FXLr6W3e3V.bhHVNmVL3nBGirGPy66FrEV88cI2Q0 
    shell: /bin/bash
runcmd:
  - ifdown eth0
  - sleep 1
  - ifup eth0
#passwords
#gokay:gokay123i
#eda:edaa
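Before booting anything, it can save a failed instance to validate the user-data syntax first. A quick sketch, assuming a reasonably recent cloud-init is installed on your workstation (the subcommand has moved around between versions):

# Exits non-zero and names the offending key if the YAML/schema is invalid.
cloud-init devel schema --config-file cloud_init.cfg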

network_config.cfg (Static IP)

This is another configuration file, this time for the network configuration. Red Hat GNU/Linux does not use netplan, so we are going to use the format below. One very important point: according to Red Hat, this configuration must be put into the meta-data, because of a bug in cloud-init.

#for virtio nic ethX
#for en100X nic ensX
network-interfaces: |
  auto eth0
  iface eth0 inet static
  address 192.168.122.102
  network 192.168.122.0
  netmask 255.255.255.0
  broadcast 192.168.122.255
  gateway 192.168.122.1
  dns-nameservers 192.168.122.1 8.8.8.8
#bootcmd:
#  - ifdown eth0
#  - ifup eth0

How do I set up a static networking configuration?

Add a network-interfaces section to the meta-data file. This section contains the usual set of networking configuration options.

Because of a current bug in cloud-init, static networking configurations are not automatically started. Instead the default DHCP configuration remains active. A suggested work around is to manually stop and restart the network interface via the bootcmd directive. Reference

As we are using Terraform, it will generate the ISO media by injecting the user-data and meta-data. If you need to do it manually, you can also use the command below.

sudo cloud-localds -v config.iso config.yaml network.cfg

The sample command above creates config.iso by injecting config.yaml and network.cfg.

You need to install the cloud-utils package in order to use cloud-localds.

Experiments:

The Jenkinsfile is configured to be fetched from the git repository; it is the same as the previous Jenkinsfile. I could not figure out how to pull the binary file from git, so I used a fixed path for the Terraform provider, which is a binary file.

pipeline{
    agent {label 'master'}
    stages{
        stage('TF Init'){
            steps{
                sh '''
                cd /data/projects/terraform/libvirt-cinit
                terraform init
                '''
            }
        }
        stage('TF Plan'){
            steps{
                sh '''
                cd /data/projects/terraform/libvirt-cinit
                terraform plan -out deploy
                '''
            }
        }

        stage('Approval') {
            steps {
                script {
                    def userInput = input(id: 'Deploy', message: 'Do you want to deploy?', parameters: [ [$class: 'BooleanParameterDefinition', defaultValue: false, description: 'Apply terraform', name: 'confirm'] ])
                }
           }
        }
        stage('TF Apply'){
            steps{
                sh '''
                cd /data/projects/terraform/libvirt-cinit
                terraform apply deploy
                '''
            }
        }
    }
}

As you can see, we successfully set the IP configuration of the virtual guest.

KVM provisioning with Jenkins and Terraform

In this CI/CD exercise we are going to provision a virtual guest on a KVM host using Terraform and Jenkins.

Officially, Terraform does not have a provider for KVM, but we are going to use a third-party provider for this.

I assume that the host that runs Terraform is able to connect to the KVM hypervisor over SSH without password authentication. To do so, we already configured the KVM hypervisor to accept login with an SSH private key.
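A minimal way to set that up, assuming the hypervisor user is tesla on oregon.anatolia.io:

# Push your public key to the KVM host, then verify libvirt answers over SSH.
ssh-copy-id tesla@oregon.anatolia.io
virsh -c qemu+ssh://tesla@oregon.anatolia.io/system version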

terraform-provider-libvirt is a compiled binary, which needs to be put into the <PROJECT FOLDER>/terraform.d/plugins/linux_amd64 folder. You can find the compiled releases here, or you can compile it yourself from source. We are going to create the following folder structure for Terraform (a placement sketch follows the tree).

libvirt/
├── libvirt.tf
└── terraform.d
    └── plugins
        └── linux_amd64
            └── terraform-provider-libvirt
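One possible way to put the provider binary in place, assuming you downloaded a release binary into ~/Downloads (the file name is illustrative):

mkdir -p terraform.d/plugins/linux_amd64
cp ~/Downloads/terraform-provider-libvirt terraform.d/plugins/linux_amd64/
chmod +x terraform.d/plugins/linux_amd64/terraform-provider-libvirt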

#libvirt.tf
provider "libvirt" {
    uri = "qemu+ssh://tesla@oregon.anatolia.io/system"
}

resource "libvirt_volume" "centos7-test" {
  name = "centos7-test"
  format = "qcow2"
  pool = "KVMGuests"
  source =  "http://oregon.anatolia.io/qcow2-repo/centos71.qcow2"

}

resource "libvirt_domain" "centos7-test" {
 autostart = "true"
  name   = "centos7-test"
  memory = "2048"
  vcpu   = 2
  running = "true"
  network_interface{
      hostname = "centos7-test"
      network_name = "default"
  }

 disk {
       volume_id = "${libvirt_volume.centos7-test.id}"
  }
console {
    type        = "pty"
    target_type = "virtio"
    target_port = "1"
  }

}

Creating a Pipeline in Jenkins

In this section, the pipeline script that needs to be defined in the Jenkins pipeline is created.

pipeline{
    agent {label 'master'}
    stages{
        stage('TF Init'){
            steps{
                sh '''
                cd /data/projects/terraform/libvirt
                terraform init
                '''
            }
        }
        stage('TF Plan'){
            steps{
                sh '''
                cd /data/projects/terraform/libvirt
                terraform plan -out createkvm
                '''
            }
        }

        stage('Approval') {
            steps {
                script {
                    def userInput = input(id: 'Approve', message: 'Do you want to Approve?', parameters: [ [$class: 'BooleanParameterDefinition', defaultValue: false, description: 'Apply terraform', name: 'confirm'] ])
                }
           }
        }
        stage('TF Apply'){
            steps{
                sh '''
                cd /data/projects/terraform/libvirt
                terraform apply createkvm
                '''
            }
        }
    }
}




You may notice that the domain name is centos7-test but the hostname is centos71. This is because I used one of the templates that I had created before; the address of the template is defined in the source section of the libvirt.tf file. In the next post, I will integrate it with cloud-init, which allows the machine to be set up at first boot. By doing that, even machine customization will be done automatically.

Creating VLANs on KVM with Open vSwitch

VLANs are a crucial L2 network technology for segmenting broadcast domains, which ultimately gives you better network utilization and security. If you are familiar with VMware technology, you can create a port group on a dVS or standard switch. But if you need to segregate your network on a KVM hypervisor, you need some other packages. In this tutorial I will show you how to create VLANs using Open vSwitch and how to integrate them with KVM.

For this post, I assume that you already have Open vSwitch installed on your system. If not, follow here. I am also assuming that you have a physical NIC to bridge to your virtual bridge (switch) created via Open vSwitch. By doing that, you can connect to the outside world.

tesla@ankara:~$ sudo ovs-vsctl -V
ovs-vsctl (Open vSwitch) 2.12.0
DB Schema 8.0.0

Creating a Virtual Bridge with Openvswitch

$ sudo ovs-vsctl add-br OVS0 

Adding the Physical NIC to the OVS0 Bridge

sudo ovs-vsctl add-port OVS0 enp0s31f6
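A quick sanity check; the bridge OVS0 should now list enp0s31f6 as one of its ports:

sudo ovs-vsctl show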

In order to integrate the bridge created by Open vSwitch with KVM, we need to create an XML configuration file that is then defined in KVM. You can see my configuration below.

<network>
 <name>OVS0</name>
 <forward mode='bridge'/>
 <bridge name='OVS0'/>
 <virtualport type='openvswitch'/>
 <portgroup name='VLAN10'>
   <vlan>
     <tag id='10'/>
   </vlan>
 </portgroup>
 <portgroup name='VLAN20'>
   <vlan>
     <tag id='20'/>
   </vlan>
 </portgroup>
 <portgroup name='VLAN30'>
   <vlan>
     <tag id='30'/>
   </vlan>
 </portgroup>
  <portgroup name='VLAN40'>
   <vlan>
     <tag id='40'/>
   </vlan>
 </portgroup>
<portgroup name='VLAN99'>
   <vlan>
     <tag id='99'/>
   </vlan>
 </portgroup>
 <portgroup name='VLAN100'>
   <vlan>
     <tag id='100'/>
   </vlan>
 </portgroup>
<portgroup name='TRUNK'>
   <vlan trunk='yes'>
     <tag id='10'/>
     <tag id='20'/>
     <tag id='30'/>
     <tag id='40'/>
     <tag id='99'/>
     <tag id='100'/>
   </vlan>
 </portgroup>
</network>

As per the XML configuration above, we are creating VLAN IDs 10, 20, 30, 40, 99, and 100, plus a TRUNK port group that carries all of them.

Defining the configuration with virsh

virsh # net-define --file OVS0.xml 
Network OVS0 defined from OVS0.xml
virsh # net-autostart --network OVS0
Network OVS0 marked as autostarted
virsh # net-list 
 Name      State    Autostart   Persistent
--------------------------------------------
 default   active   yes         yes
 OVS0      active   yes         yes

After defining it, you will see that your XML file has been modified by KVM with a UUID.

<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
  virsh net-edit OVS0
or other application using the libvirt API.
-->

<network>
  <name>OVS0</name>
  <uuid>a38bdd43-7fba-4e23-98f1-8c0ab83cff2c</uuid>
  <forward mode='bridge'/>
  <bridge name='OVS0'/>
  <virtualport type='openvswitch'/>
  <portgroup name='VLAN10'>
    <vlan>
      <tag id='10'/>
    </vlan>
  </portgroup>
  <portgroup name='VLAN20'>
    <vlan>
      <tag id='20'/>
    </vlan>
  </portgroup>
  <portgroup name='VLAN30'>
    <vlan>
      <tag id='30'/>
    </vlan>
  </portgroup>
  <portgroup name='VLAN40'>
    <vlan>
      <tag id='40'/>
    </vlan>
  </portgroup>
  <portgroup name='VLAN99'>
    <vlan>
      <tag id='99'/>
    </vlan>
  </portgroup>
  <portgroup name='VLAN100'>
    <vlan>
      <tag id='100'/>
    </vlan>
  </portgroup>
  <portgroup name='TRUNK'>
    <vlan trunk='yes'>
      <tag id='10'/>
      <tag id='20'/>
      <tag id='30'/>
      <tag id='40'/>
      <tag id='99'/>
      <tag id='100'/>
    </vlan>
  </portgroup>
</network>

Experiments

Let's check in virt-manager whether we are able to see the port groups.
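If you prefer the command line, the same information is available from libvirt and Open vSwitch:

# List the port groups defined on the libvirt network...
virsh net-dumpxml OVS0 | grep portgroup
# ...and the VLAN tags on the ports that guests are attached to.
sudo ovs-vsctl list port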

Capturing packets with Wireshark on the physical NIC connected to OVS0

Compiling Archer T600U Plus WiFi USB Adapter on GNU/Linux with dkms

In this very short post, I am going to show you how to compile the TP-Link USB adapter module on GNU/Linux with dkms. I am using Pop!_OS 19.10.

$ sudo apt install git dkms
$ git clone https://github.com/aircrack-ng/rtl8812au.git
$ cd rtl8812au
$ sudo ./dkms-install.sh

If everything goes well you should get an output similar to below.

About to run dkms install steps...

Creating symlink /var/lib/dkms/rtl8812au/5.6.4.2/source ->
                 /usr/src/rtl8812au-5.6.4.2

DKMS: add completed.

Kernel preparation unnecessary for this kernel.  Skipping...

Building module:
cleaning build area...
'make' -j8 KVER=5.3.0-20-generic KSRC=/lib/modules/5.3.0-20-generic/build........
cleaning build area...

DKMS: build completed.

88XXau.ko:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/5.3.0-20-generic/updates/dkms/

depmod....

DKMS: install completed.
Finished running dkms install steps.
$ ip link show
...
3: wlx34e894b147cc: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 2312 qdisc mq state UP mode DORMANT group default qlen 1000
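To confirm the module was actually loaded and picked up by the adapter, a quick check (the driver registers itself as 88XXau, as seen in the build output above):

lsmod | grep 88XXau
dmesg | tail     # look for the USB adapter being bound to the driver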

Write Your Own Custom Plugin for check_mk

check_mk is an open-source monitoring solution with hundreds of checks, which enables you to monitor your IT infrastructure. Besides that, it allows you to configure most of your monitoring-related activities from the graphical interface called "WATO". Most of the time it automatically discovers the system once it is added to the check_mk inventory. But not always! Nevertheless, we are still able to monitor the system by writing a custom plugin. In this post, I will show you how to write your own custom plugin for check_mk. As an experiment, I used my home router, which is a Ubiquiti EdgeRouter X. I enabled SNMP version 2 on the router.

Before writing a plugin, we need to decide what we are going to monitor, and then figure out the related SNMP OID. In this post, I am going to monitor the bandwidth usage of the interface eth0 on my router. To find the related OIDs you can use the snmpwalk tool. When you run it with the correct parameters, it prints all the information. If you have not downloaded the necessary MIB files onto your system, you will see all the elements with their numeric OIDs, which are hard to interpret.

#snmpwalk -v2c -c public 192.168.1.1 . | less
iso.3.6.1.2.1.1.1.0 = STRING: "EdgeOS v1.10.8.5142457.181120.1809"
iso.3.6.1.2.1.1.2.0 = OID: iso.3.6.1.4.1.41112.1.5
iso.3.6.1.2.1.1.3.0 = Timeticks: (11759) 0:01:57.59
iso.3.6.1.2.1.1.4.0 = STRING: "root"
iso.3.6.1.2.1.1.5.0 = STRING: "ubnt"
iso.3.6.1.2.1.1.6.0 = STRING: "Unknown"
iso.3.6.1.2.1.1.7.0 = INTEGER: 14
iso.3.6.1.2.1.1.8.0 = Timeticks: (18) 0:00:00.18
iso.3.6.1.2.1.1.9.1.2.1 = OID: iso.3.6.1.2.1.10.131
iso.3.6.1.2.1.1.9.1.2.2 = OID: iso.3.6.1.6.3.11.3.1.1
iso.3.6.1.2.1.1.9.1.2.3 = OID: iso.3.6.1.6.3.15.2.1.1
...(omitted)

If your output looks like the above, you may need to download the MIB files. You can install snmp-mibs-downloader on GNU/Linux. If you are lucky, snmpwalk gives you all the information translated into text notation, which is more meaningful for us.

# apt-get install snmp-mibs-downloader
# download-mibs
# snmpwalk -v2c -c public 192.168.1.1 . | less

SNMPv2-MIB::sysDescr.0 = STRING: EdgeOS v1.10.8.5142457.181120.1809
SNMPv2-MIB::sysObjectID.0 = OID: SNMPv2-SMI::enterprises.41112.1.5
SNMPv2-MIB::sysUpTime.0 = Timeticks: (44730) 0:07:27.30
SNMPv2-MIB::sysContact.0 = STRING: root
SNMPv2-MIB::sysName.0 = STRING: ubnt
SNMPv2-MIB::sysLocation.0 = STRING: Unknown
SNMPv2-MIB::sysServices.0 = INTEGER: 14
SNMPv2-MIB::sysORLastChange.0 = Timeticks: (18) 0:00:00.18
SNMPv2-MIB::sysORID.1 = OID: SNMPv2-SMI::transmission.131
SNMPv2-MIB::sysORID.2 = OID: SNMPv2-SMI::snmpModules.11.3.1.1
SNMPv2-MIB::sysORID.3 = OID: SNMPv2-SMI::snmpModules.15.2.1.1
SNMPv2-MIB::sysORID.4 = OID: SNMPv2-SMI::snmpModules.10.3.1.1



As I am going to monitor the bandwidth usage of the interface eth0, I need to find the related OID numbers, which are below.

#Text Notation
IF-MIB::ifDescr.1 = STRING: lo
IF-MIB::ifDescr.2 = STRING: switch0
IF-MIB::ifDescr.3 = STRING: imq0
IF-MIB::ifDescr.4 = STRING: eth0
IF-MIB::ifDescr.5 = STRING: eth1
IF-MIB::ifDescr.6 = STRING: eth2
IF-MIB::ifDescr.7 = STRING: eth3
IF-MIB::ifDescr.8 = STRING: eth4
IF-MIB::ifDescr.9 = STRING: eth1.20
IF-MIB::ifDescr.10 = STRING: eth1.10


IF-MIB::ifInOctets.1 = Counter32: 38770
IF-MIB::ifInOctets.2 = Counter32: 0
IF-MIB::ifInOctets.3 = Counter32: 0
IF-MIB::ifInOctets.4 = Counter32: 307201
IF-MIB::ifInOctets.5 = Counter32: 0
IF-MIB::ifInOctets.6 = Counter32: 0
IF-MIB::ifInOctets.7 = Counter32: 0
IF-MIB::ifInOctets.8 = Counter32: 0
IF-MIB::ifInOctets.9 = Counter32: 0
IF-MIB::ifInOctets.10 = Counter32: 0

IF-MIB::ifOutOctets.1 = Counter32: 38770
IF-MIB::ifOutOctets.2 = Counter32: 424
IF-MIB::ifOutOctets.3 = Counter32: 0
IF-MIB::ifOutOctets.4 = Counter32: 295279
IF-MIB::ifOutOctets.5 = Counter32: 0
IF-MIB::ifOutOctets.6 = Counter32: 0
IF-MIB::ifOutOctets.7 = Counter32: 0
IF-MIB::ifOutOctets.8 = Counter32: 0
IF-MIB::ifOutOctets.9 = Counter32: 0
IF-MIB::ifOutOctets.10 = Counter32: 0

#OID Notation

.1.3.6.1.2.1.2.2.1.2.1 = STRING: lo
.1.3.6.1.2.1.2.2.1.2.2 = STRING: switch0
.1.3.6.1.2.1.2.2.1.2.3 = STRING: imq0
.1.3.6.1.2.1.2.2.1.2.4 = STRING: eth0
.1.3.6.1.2.1.2.2.1.2.5 = STRING: eth1
.1.3.6.1.2.1.2.2.1.2.6 = STRING: eth2
.1.3.6.1.2.1.2.2.1.2.7 = STRING: eth3
.1.3.6.1.2.1.2.2.1.2.8 = STRING: eth4
.1.3.6.1.2.1.2.2.1.2.9 = STRING: eth1.20
.1.3.6.1.2.1.2.2.1.2.10 = STRING: eth1.10

.1.3.6.1.2.1.2.2.1.10.1 = Counter32: 55901
.1.3.6.1.2.1.2.2.1.10.2 = Counter32: 0
.1.3.6.1.2.1.2.2.1.10.3 = Counter32: 0
.1.3.6.1.2.1.2.2.1.10.4 = Counter32: 967790
.1.3.6.1.2.1.2.2.1.10.5 = Counter32: 0
.1.3.6.1.2.1.2.2.1.10.6 = Counter32: 0
.1.3.6.1.2.1.2.2.1.10.7 = Counter32: 0
.1.3.6.1.2.1.2.2.1.10.8 = Counter32: 0
.1.3.6.1.2.1.2.2.1.10.9 = Counter32: 0
.1.3.6.1.2.1.2.2.1.10.10 = Counter32: 0

.1.3.6.1.2.1.2.2.1.16.1 = Counter32: 55901
.1.3.6.1.2.1.2.2.1.16.2 = Counter32: 424
.1.3.6.1.2.1.2.2.1.16.3 = Counter32: 0
.1.3.6.1.2.1.2.2.1.16.4 = Counter32: 1031485
.1.3.6.1.2.1.2.2.1.16.5 = Counter32: 0
.1.3.6.1.2.1.2.2.1.16.6 = Counter32: 0
.1.3.6.1.2.1.2.2.1.16.7 = Counter32: 0
.1.3.6.1.2.1.2.2.1.16.8 = Counter32: 0
.1.3.6.1.2.1.2.2.1.16.9 = Counter32: 0
.1.3.6.1.2.1.2.2.1.16.10 = Counter32: 0


As depicted above, we need the interface name and the ifInOctets and ifOutOctets values of the interface in order to monitor its bandwidth usage.

Math!

We also need some basic math to extract a rate from the data, because SNMP gives us the bandwidth usage as counter-based octet values. So, we need to know how to convert them.

Formula:

        (Counter2 - Counter1)
bps  =  ---------------------  * 8        (bits per second)
           (Time2 - Time1)
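For example, taking the two eth0 ifInOctets samples shown earlier (307201, then 967790) and assuming, purely for illustration, that they were taken 300 seconds apart: (967790 - 307201) / 300 * 8 ≈ 17616 bps, which is about 17.2 Kbps.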

If you still do not understand, you can check this page. It is a piece of cake! 🙂

Actually, check_mk has a very nice function that keeps track of the time and counter values and computes the delta automatically, so we do not need to store anything ourselves. We only need to pass the correct values to the get_rate() function.

For a custom plugin, we need to put the plugin file into the correct folder:

/omd/sites/<your site>/local/share/check_mk/checks

I created my plugin called “edge_router_bw”

#!/usr/bin/env python
import time

# Default thresholds: (warn TX, crit TX, warn RX, crit RX) in Kbps.
edge_router_default_bw_values = (30.0, 35.0, 30.0, 35.0)

def inventory_edge_router_bw(info):
    # Create one service per discovered interface, parameterized with the defaults.
    for interface, inoctets, outoctets in info:
        yield interface, "edge_router_default_bw_values"

def check_edge_router_bw(item, params, info):
    warntx, crittx, warnrx, critrx = params
    for interface, inoctets, outoctets in info:
        if interface == item:
            this_time = time.time()
            if interface == "eth0":
                # get_rate() remembers the previous (time, counter) pair and
                # returns the per-second delta, so no manual bookkeeping is needed.
                rx = 8.0 * get_rate("RX.%s" % interface, this_time, float(inoctets))
                tx = 8.0 * get_rate("TX.%s" % interface, this_time, float(outoctets))
                perfdata = [("RX", float(rx)/1024.0), ("TX", float(tx)/1024.0)]
                tx = float(tx)/1024.0
                rx = float(rx)/1024.0
                if rx >= critrx or tx >= crittx:
                    return 2, ("RX: %.2f Kbps, TX: %.2f Kbps" % (rx, tx)), perfdata
                elif rx >= warnrx or tx >= warntx:
                    return 1, ("RX: %.2f Kbps, TX: %.2f Kbps" % (rx, tx)), perfdata
                else:
                    return 0, ("RX: %.2f Kbps, TX: %.2f Kbps" % (rx, tx)), perfdata

check_info["edge_router_bw"] = {
    "check_function"        : check_edge_router_bw,
    "inventory_function"    : inventory_edge_router_bw,
    "service_description"   : "Edge Router NICs bandwidth %s",
    "snmp_info"             : ( ".1.3.6.1.2.1.2.2.1", [ "2", "10", "16" ] ),
    "has_perfdata"          : True,
    "group"                 : "edge_router_bw",
}

One of the nice features of check_mk is that you can define threshold values for warning and critical levels and then set or change these levels from WATO. No more editing files. For that, you need to create the configuration file below, which makes the parameters editable from WATO.

Create a file under /omd/sites/<your site>/local/share/check_mk/web/plugins/wato/.

I created a file called check_param_router_edge_bw.py

register_check_parameters(
        subgroup_networking,
        "edge_router_bw",
        _("Edge router Bandwith Kbps"),
        Tuple(
            title = _("Edge Router Interface Bandwith"),
            elements = [
                Float(title = _("Set WARNING if TX above Kbps"), minvalue = 0.0, maxvalue = 10000.0, default_value = 30.0),
                Float(title = _("Set CRITICAL if TX  above Kbps"), minvalue = 0.0, maxvalue = 10000.0, default_value = 35.0),
                Float(title = _("Set WARNING if RX above Kbps"), minvalue = 0.0, maxvalue = 10000.0, default_value = 30.0),
                Float(title = _("Set CRITICAL if RX above Kbps"), minvalue = 0.0, maxvalue = 10000.0, default_value = 35.0),
            ]),
            TextAscii(
                title = _("Inteface Bandwith Kbps"),
                allow_empty = False),
            "first"
)

Once everything is in place, we can check our plugin. Run it with the --debug option to see whether there is an error in the script.

OMD[monitoring]:~/local/share/check_mk/checks$ cmk --debug --checks=edge_router_bw -I edgerouter

If there is no error, we can inventory the host in check_mk:

OMD[monitoring]:~/local/share/check_mk/checks$ cmk -IIv edgerouter
Discovering services on edgerouter:
edgerouter:
   10 edge_router_bw
   10 edge_router_params
    1 hr_cpu
    4 hr_fs
    1 hr_mem
    2 if64
    1 snmp_info
    1 snmp_uptime

Finally, testing the plugin on the command line.

OMD[monitoring]:~/local/share/check_mk/checks$ cmk -nvp edgerouter
Check_MK version 1.4.0p38
CPU utilization                  OK - 1.5% used                     (util=1.5;80;90;0;100)
Edge Router NICs bandwidth eth0  OK - RX: 2.45 Kbps, TX: 6.59 Kbps  (RX=2.453042;;;; TX=6.589328;;;;)

You can then log in to the check_mk graphical interface.

Experiment

As you can see below, you can change the WARNING and CRITICAL levels from WATO.

That’s all for now. Happy monitoring 🙂

Connect KVM over GRE

Hi Folks,

As you may know, libvirt virtual network switches operate in NAT mode by default (IP masquerading rather than SNAT or DNAT). In this mode, virtual guests can communicate with the outside world, but computers external to the host cannot initiate communication to the guests inside while the virtual network switch is operating in NAT mode. One solution is creating a virtual switch in routed mode. But we have one more option that leaves the underlying virtual switch operating mode unchanged: creating a GRE tunnel between the hosts.

What is GRE?

GRE (Generic Routing Encapsulation) is a communication protocol that provides virtual point-to-point communication. It is a very simple and effective method of transporting data over a public network. You can use a GRE tunnel in some of the cases below.

  • Use of multiple protocols over a single-protocol backbone
  • Providing workarounds for networks with limited hops
  • Connection of non-contiguous subnetworks
  • Being less resource demanding than its alternatives (e.g. IPsec VPN)

Reference: https://www.incapsula.com/blog/what-is-gre-tunnel.html

Example of GRE encapsulation
Reference: https://www.incapsula.com/blog/what-is-gre-tunnel.html

I created a GRE tunnel to connect to some of the KVM guests from an external host. Figure-2 depicts what my topology looks like.

Figure-2 Connecting KVM guests over GRE Tunnel

I have two physical hosts, running the Mint and Ubuntu GNU/Linux distributions. KVM is running on the Ubuntu host.

GRE Tunnel configuration on GNU/Linux hosts

Before creating the GRE tunnel, we need to load the ip_gre module on both GNU/Linux hosts.

mint@mint$ sudo modprobe ip_gre
tesla@otuken:~$ sudo modprobe ip_gre

Configuring the physical interfaces on both nodes.

mint@mint$ sudo ip addr add 100.100.100.1/24 dev enp0s31f6
tesla@otuken:~$ sudo ip addr add 100.100.100.2/24 dev enp2s0

Configuring GRE Tunnel (On the first node)

mint@mint$ sudo ip tunnel add tun0 mode gre remote 100.100.100.2 local 100.100.100.1 ttl 255
mint@mint$ sudo ip link set tun0 up
mint@mint$ sudo ip addr add 10.0.0.10/24 dev tun0
mint@mint$ sudo ip route add 10.0.0.0/24 dev tun0
mint@mint$ sudo ip route add 192.168.122.0/24 dev tun0

Configuring GRE Tunnel (On the Second Node)

tesla@otuken:~$ sudo ip tunnel add tun0 mode gre remote 100.100.100.1 local 100.100.100.2 ttl 255
tesla@otuken:~$ sudo ip link set tun0 up
tesla@otuken:~$ sudo ip addr add 10.0.0.20/24 dev tun0
tesla@otuken:~$ sudo ip route add 10.0.0.0/24 dev tun0

As the GRE protocol adds an additional 24 bytes of header, it is highly recommended to set the MTU on the tunnel interfaces. The commonly recommended MTU value is 1400.
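For example, on both hosts:

sudo ip link set tun0 mtu 1400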

Also, do not forget to check the iptables rules on both hosts.
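GRE is IP protocol 47, so if the tunnel does not come up, a rule along these lines (a sketch; adapt it to your existing rule set) allows it in:

# Accept incoming GRE (IP protocol 47) packets.
sudo iptables -I INPUT -p gre -j ACCEPT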

Experiment:

Once the configuration was completed, I successfully pinged the KVM guest (192.168.122.35) and transferred a file over SSH (scp). You can download the Wireshark pcap file here.