MaaS MAC filtering

Are you using MaaS for bare-metal server or VM deployments, and do you need MAC filtering so that MaaS ignores DHCP discovery packets coming from particular MAC address(es)? If so, simply add an isc-dhcp configuration snippet like the one below to MaaS (Settings > DHCP Snippets) and set the scope to Global.

class "black-list" 
{    
	match substring (hardware, 1, 6);      
	ignore booting;
}
subclass "black-list" 00:10:9b:8f:31:71;
subclass "black-list" 00:10:9b:8f:31:78;

After adding, enabling, and saving the snippet, MaaS will not send any PXE boot reply to systems with the MAC addresses above.
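If the deny list grows, the subclass lines can be generated with a small shell loop. This is just a convenience sketch, not a MaaS feature; the MAC list is the pair from the snippet above:

```shell
# Generate one "subclass" line per MAC address to paste into the DHCP snippet.
macs="00:10:9b:8f:31:71 00:10:9b:8f:31:78"
for mac in $macs; do
  printf 'subclass "black-list" %s;\n' "$mac"
done
```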

Happy Deployment 🙂

KVM provisioning with Jenkins and Terraform(Cloud-init)

In the previous post, we provisioned a guest on KVM, but only by cloning it from a template. There was no automatic configuration of the guest, such as the hostname, name servers, or IP settings, and that needs to be automated as well.

In order to automate these settings at boot time, we are going to use cloud-init.

What is cloud-init?

Cloud-init is a service installed inside the instance, and cloud-config is a set of scripts executed as soon as the instance starts; cloud-config is the language of the scripts that cloud-init knows how to execute. Cloud-init runs on Linux workloads; for Microsoft Windows workloads, the equivalent is Cloudbase-Init, which supports the majority of cloud-config parameters. The service starts early in the boot process and retrieves metadata provided by an external provider (metadata) or user data supplied directly by the user. Reference

cloud-init is run only the first time that the machine is booted. If cloud-init fails because of syntax errors in the file or doesn’t contain all of the needed directives, such as user credentials, a new instance must be created and launched. Restarting the failed instance with a new cloud-init file will not work. Reference

Let's start by creating the folder structure.

libvirt-cinit
├── cloud_init.cfg
├── libvirt-cinit.tf
├── network_config.cfg
└── terraform.d
    └── plugins
        └── linux_amd64
            └── terraform-provider-libvirt

#libvirt-cinit.tf

provider "libvirt" {
    uri = "qemu+ssh://tesla@oregon.anatolia.io/system"
}

resource "libvirt_volume" "centos71-test" {
  name = "centos71-test"
  format = "qcow2"
  pool = "KVMGuests"
  #qcow2 will be cloud-init compatible qcow2 disk image
  #https://cloud.centos.org/centos/7/images/
  source =  "http://oregon.anatolia.io/qcow2-repo/centos71-cloud.qcow2"
}


data "template_file" "user_data" {
 template = file("${path.module}/cloud_init.cfg")
}

data "template_file" "meta_data" {
 template = file("${path.module}/network_config.cfg")
}

resource "libvirt_cloudinit_disk" "cloudinit" {
  name           = "cloudinit.iso"  
  user_data      = data.template_file.user_data.rendered
  meta_data      = data.template_file.meta_data.rendered
  pool           = "KVMGuests"
}


resource "libvirt_domain" "centos71-test" {
 autostart = "true"
  name   = "centos71-test"
  memory = "2048"
  vcpu   = 2
  running = "true"
  cloudinit = libvirt_cloudinit_disk.cloudinit.id

  network_interface {
      network_name = "default"
  }

 disk {
       #scsi = "true"
       volume_id = libvirt_volume.centos71-test.id
  }
console {
    type        = "pty"
    target_type = "virtio"
    target_port = "1"
  }

}

cloud_init.cfg

It is a YAML file (aka user-data) that contains all guest-related settings, such as the hostname, users, and SSH keys. This file is injected into the cloud-init ISO, and it MUST start with the line #cloud-config.

#cloud-config
hostname: centos71-test
fqdn: centos71-test.anatolia.io
manage_etc_hosts: true
# ssh_pwauth and chpasswd are top-level directives, not per-user keys
ssh_pwauth: true
chpasswd: { expire: false }
users:
  - name: gokay
    sudo: ALL=(ALL) NOPASSWD:ALL
    gecos: Gokay IT user
    lock_passwd: false
    passwd: $6$kOh5qSjBzbBfw9Rz$Y5bxvHvA637lSmancdrc17072tdVTpuk8hJ9CX4GV8pvvZXQ/Bv3y8ljY9KjJtLPg6hsyrqe4OHdvAlzFKae/0
    shell: /bin/bash
  - name: eda
    gecos: Eda SEC user
    lock_passwd: false
    passwd: $6$llELUDypCmO.oC9Q$QjykXeZQxJ7hxndJaIQMvxewG3Mb05zHn5NlA8Nf17wd5FXLr6W3e3V.bhHVNmVL3nBGirGPy66FrEV88cI2Q0
    shell: /bin/bash
runcmd:
  - ifdown eth0
  - sleep 1
  - ifup eth0
#passwords
#gokay:gokay123i
#eda:edaa
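Before building the ISO, it is worth checking that the file really begins with the mandatory header, since cloud-init silently skips user-data that lacks it. A minimal sketch, assuming the file name used in this post:

```shell
# Sanity check: user-data must begin with the exact "#cloud-config" line,
# otherwise cloud-init treats it as an unknown format and ignores it.
if head -n 1 cloud_init.cfg | grep -qx '#cloud-config'; then
  echo "cloud_init.cfg: header OK"
else
  echo "cloud_init.cfg: missing #cloud-config header" >&2
fi
```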

network_config.cfg (Static IP)

This is another configuration file, this time for the network settings. Red Hat GNU/Linux does not use netplan, so we are going to use the format below. One very important point: according to Red Hat, this configuration must be put into the meta-data, because of a bug in cloud-init.

#for virtio nic ethX
#for en100X nic ensX
network-interfaces: |
  auto eth0
  iface eth0 inet static
  address 192.168.122.102
  network 192.168.122.0
  netmask 255.255.255.0
  broadcast 192.168.122.255
  gateway 192.168.122.1
  dns-nameservers 192.168.122.1 8.8.8.8
#bootcmd:
#  - ifdown eth0
#  - ifup eth0
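If you deploy several guests, only the address-related lines of this meta-data change, so the block can be templated. A small sketch with a hypothetical helper function (the function and variable names are my own):

```shell
# Render the network-interfaces meta-data block for a given static address.
render_netcfg() {
  ip=$1 gw=$2 dns=$3
  cat <<EOF
network-interfaces: |
  auto eth0
  iface eth0 inet static
  address $ip
  netmask 255.255.255.0
  gateway $gw
  dns-nameservers $dns
EOF
}
render_netcfg 192.168.122.102 192.168.122.1 "192.168.122.1 8.8.8.8"
```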

How do I set up a static networking configuration?

Add a network-interfaces section to the meta-data file. This section contains the usual set of networking configuration options.

Because of a current bug in cloud-init, static networking configurations are not automatically started; instead, the default DHCP configuration remains active. A suggested workaround is to manually stop and restart the network interface via the bootcmd directive. Reference

As we are using Terraform, it will generate the ISO media by injecting the user-data and meta-data. If you need to do this manually, you can use the command below.

sudo cloud-localds -v config.iso config.yaml network.cfg

The sample command above creates config.iso by injecting config.yaml and network.cfg.

You need to install the cloud-utils package in order to use cloud-localds.

Experiments:

The Jenkinsfile is pulled from the git repository and is the same as the previous Jenkinsfile. I could not figure out how to pull a binary file from git, so I used a fixed path for the Terraform provider, which is a binary file.

pipeline{
    agent {label 'master'}
    stages{
        stage('TF Init'){
            steps{
                sh '''
                cd /data/projects/terraform/libvirt-cinit
                terraform init
                '''
            }
        }
        stage('TF Plan'){
            steps{
                sh '''
                cd /data/projects/terraform/libvirt-cinit
                terraform plan -out deploy
                '''
            }
        }

        stage('Approval') {
            steps {
                script {
                    def userInput = input(id: 'Deploy', message: 'Do you want to deploy?', parameters: [ [$class: 'BooleanParameterDefinition', defaultValue: false, description: 'Apply terraform', name: 'confirm'] ])
                }
           }
        }
        stage('TF Apply'){
            steps{
                sh '''
                cd /data/projects/terraform/libvirt-cinit
                terraform apply deploy
                '''
            }
        }
    }
}

As you can see, we successfully set the IP configuration of the virtual guest.

KVM provisioning with Jenkins and Terraform

In this CI/CD activity we are going to provision a virtual guest on a KVM host using Terraform and Jenkins.

Officially, Terraform does not have a provider for KVM, so we are going to use a third-party provider.

I assume that the host running Terraform is able to connect to the KVM hypervisor over SSH without password authentication; we already configured the KVM hypervisor for login with an SSH private key.

terraform-provider-libvirt is a compiled binary that needs to be put into the <PROJECT FOLDER>/terraform.d/plugins/linux_amd64 folder. You can find the compiled releases here, or you can compile it yourself from the source. We are going to create the following folder structure for Terraform.

libvirt/
├── libvirt.tf
└── terraform.d
    └── plugins
        └── linux_amd64
            └── terraform-provider-libvirt
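The layout above can be prepared with a few commands once the provider binary has been downloaded from the releases page (the commands assume the binary already sits in the current directory):

```shell
# Place the third-party provider where Terraform 0.12-era projects look for it:
# <PROJECT FOLDER>/terraform.d/plugins/linux_amd64
mkdir -p libvirt/terraform.d/plugins/linux_amd64
cp terraform-provider-libvirt libvirt/terraform.d/plugins/linux_amd64/
chmod +x libvirt/terraform.d/plugins/linux_amd64/terraform-provider-libvirt
```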

#libvirt.tf
provider "libvirt" {
  uri = "qemu+ssh://tesla@oregon.anatolia.io/system"
}

resource "libvirt_volume" "centos7-test" {
  name   = "centos7-test"
  format = "qcow2"
  pool   = "KVMGuests"
  source = "http://oregon.anatolia.io/qcow2-repo/centos71.qcow2"
}

resource "libvirt_domain" "centos7-test" {
  autostart = true
  name      = "centos7-test"
  memory    = "2048"
  vcpu      = 2
  running   = true

  network_interface {
    hostname     = "centos7-test"
    network_name = "default"
  }

  disk {
    volume_id = libvirt_volume.centos7-test.id
  }

  console {
    type        = "pty"
    target_type = "virtio"
    target_port = "1"
  }
}

Creating a Pipeline in Jenkins

In this section we create the Pipeline script that needs to be defined in the Jenkins Pipeline job.

pipeline{
    agent {label 'master'}
    stages{
        stage('TF Init'){
            steps{
                sh '''
                cd /data/projects/terraform/libvirt
                terraform init
                '''
            }
        }
        stage('TF Plan'){
            steps{
                sh '''
                cd /data/projects/terraform/libvirt
                terraform plan -out createkvm
                '''
            }
        }

        stage('Approval') {
            steps {
                script {
                    def userInput = input(id: 'Approve', message: 'Do you want to Approve?', parameters: [ [$class: 'BooleanParameterDefinition', defaultValue: false, description: 'Apply terraform', name: 'confirm'] ])
                }
           }
        }
        stage('TF Apply'){
            steps{
                sh '''
                cd /data/projects/terraform/libvirt
                terraform apply createkvm
                '''
            }
        }
    }
}

You may notice that the domain name is centos7-test but the hostname is centos71. This is because I used one of the templates I had created before; the template's address is defined in the source argument of the libvirt.tf file. In the next post, I will integrate this with cloud-init, which allows the machine to configure itself at first boot. That way, even machine customization will be done automatically.