Provisioning a VM on KVM via Kickstart using virt-install

virt-install is a command-line tool for creating new KVM, Xen, or Linux container guests using the libvirt hypervisor management library. It is one of the quickest ways to deploy a VM from the command line. In this post I will also show you how to install CentOS on KVM via kickstart. For this installation, instead of the native GNU/Linux bridge, we are using Open vSwitch.

I am assuming that you have already configured your DHCP, DNS, and HTTP server environment for PXE boot. I am using Cobbler for DHCP server management. I am provisioning CentOS machines as Kubernetes cluster nodes. As I use a remote KVM host, the user tesla has to be able to connect with SSH key authentication and has to be in the libvirt group.

Provisioning script.

virt-install \
--connect qemu+ssh://tesla@192.168.122.1/system \
--name k8s-master \
--ram 2048 \
--disk bus=virtio,pool=KVMGuests,size=15,format=qcow2 \
--network network=OVS0,model=virtio,virtualport_type=openvswitch,portgroup=VLAN100 \
--vcpus 2 \
--os-type linux \
--location http://cobbler.manintheit.org/cblr/links/CentOS7-x86_64 \
--os-variant rhel7 \
--extra-args="ks=http://10.5.100.253/k8s/k8s-master-centos7.ks ksdevice=eth0 ip=10.5.100.15 netmask=255.255.255.0 dns=10.5.100.253 gateway=10.5.100.254" 

The --location option specifies the distribution tree installation source. virt-install can recognize certain distribution trees and fetches a bootable kernel/initrd pair to launch the install.
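If you are unsure which --os-variant values your virt-install recognizes, the libosinfo tooling (the osinfo-query command, an assumption about what is installed on your host) can list them:

```shell
# List the OS identifiers libosinfo knows about, filtered for RHEL 7
# (CentOS 7 guests typically use the rhel7.x variants).
osinfo-query os | grep -i rhel7
```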

k8s-master-centos7.ks

install
text
eula --agreed
url --url=http://10.5.100.253/cblr/links/CentOS7-x86_64/
lang en_US.UTF-8
keyboard us
network --onboot=on --bootproto=static  --ip 10.5.100.15 --netmask 255.255.255.0 --gateway 10.5.100.254 --nameserver 10.5.100.253 --device=eth0 --hostname k8s-master.manintheit.org
rootpw root
firewall --disabled
selinux --permissive
timezone Europe/Berlin
skipx
zerombr
clearpart --all --initlabel
part /boot --fstype ext4 --size=512
part /     --fstype ext4 --size=1 --grow
authconfig --enableshadow --passalgo=sha512
services --enabled=NetworkManager,sshd
reboot
user --name=tesla --plaintext --password tesla --groups=tesla,wheel

#repo --name=base --baseurl=http://mirror.centos.org/centos/7.3.1611/os/x86_64/
#repo --name=epel-release --baseurl=http://anorien.csc.warwick.ac.uk/mirrors/epel/7/x86_64/
#repo --name=elrepo-kernel --baseurl=http://elrepo.org/linux/kernel/el7/x86_64/
#repo --name=elrepo-release --baseurl=http://elrepo.org/linux/elrepo/el7/x86_64/
#repo --name=elrepo-extras --baseurl=http://elrepo.org/linux/extras/el7/x86_64/

%packages --ignoremissing --excludedocs
@Base
%end

%post
yum update -y
yum install -y sudo
echo "tesla        ALL=(ALL)       NOPASSWD: ALL" >> /etc/sudoers.d/tesla
sed -i "s/^.*requiretty/#Defaults requiretty/" /etc/sudoers
/bin/echo 'UseDNS no' >> /etc/ssh/sshd_config
yum clean all
/bin/sh -c 'echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf'
modprobe br_netfilter
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

#Enable kubernetes repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
%end
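Before serving the file over HTTP, it is worth syntax-checking it. Assuming the pykickstart package is installed, ksvalidator can catch typos in the kickstart directives:

```shell
# Validate the kickstart file against the RHEL/CentOS 7 command set.
ksvalidator -v RHEL7 k8s-master-centos7.ks
```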

Creating VLANs on KVM with OpenVswitch

VLANs are a crucial L2 network technology for segmenting broadcast domains, which ultimately gives you better network utilization and security. If you are familiar with VMware technology, you can create a port group on a dVS or standard switch. But if you need to segregate your network on a KVM hypervisor, you need some other packages. In this tutorial I will show you how to create VLANs by using Open vSwitch and integrating it with KVM.

For this post, I assume that you already have Open vSwitch installed on your system. If not, follow here. I am also assuming that you have a physical NIC to bridge to your virtual bridge (switch) created via Open vSwitch. By doing that, you can connect to the outside world.

tesla@ankara:~$ sudo ovs-vsctl -V
ovs-vsctl (Open vSwitch) 2.12.0
DB Schema 8.0.0

Creating a Virtual Bridge with Open vSwitch

$ sudo ovs-vsctl add-br OVS0 

Adding the Physical NIC to the OVS0 Bridge

sudo ovs-vsctl add-port OVS0 enp0s31f6
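You can verify the result before moving on:

```shell
# Show the Open vSwitch configuration; bridge OVS0 should now list
# enp0s31f6 among its ports.
sudo ovs-vsctl show
```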

In order to integrate the bridge created by Open vSwitch with KVM, we need to create an XML configuration file and define it on KVM. You can see my configuration below.

<network>
 <name>OVS0</name>
 <forward mode='bridge'/>
 <bridge name='OVS0'/>
 <virtualport type='openvswitch'/>
 <portgroup name='VLAN10'>
   <vlan>
     <tag id='10'/>
   </vlan>
 </portgroup>
 <portgroup name='VLAN20'>
   <vlan>
     <tag id='20'/>
   </vlan>
 </portgroup>
 <portgroup name='VLAN30'>
   <vlan>
     <tag id='30'/>
   </vlan>
 </portgroup>
  <portgroup name='VLAN40'>
   <vlan>
     <tag id='40'/>
   </vlan>
 </portgroup>
<portgroup name='VLAN99'>
   <vlan>
     <tag id='99'/>
   </vlan>
 </portgroup>
 <portgroup name='VLAN100'>
   <vlan>
     <tag id='100'/>
   </vlan>
 </portgroup>
<portgroup name='TRUNK'>
   <vlan trunk='yes'>
     <tag id='10'/>
     <tag id='20'/>
     <tag id='30'/>
     <tag id='40'/>
     <tag id='99'/>
     <tag id='100'/>
   </vlan>
 </portgroup>
</network>

As per the XML configuration above, we are creating port groups for VLAN IDs 10, 20, 30, 40, 99 and 100, plus a TRUNK port group that carries all of them.

Defining the configuration with virsh

virsh # net-define --file OVS0.xml 
Network OVS0 defined from OVS0.xml
virsh # net-autostart --network OVS0
Network OVS0 marked as autostarted
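Note that net-define only makes the network persistent and net-autostart only takes effect on subsequent boots; if the network does not yet show as active, start it once by hand:

```shell
virsh # net-start OVS0
```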
virsh # net-list 
 Name      State    Autostart   Persistent
--------------------------------------------
 default   active   yes         yes
 OVS0      active   yes         yes

After defining it, you will see that libvirt rewrites your XML file and adds a UUID.

<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
  virsh net-edit OVS0
or other application using the libvirt API.
-->

<network>
  <name>OVS0</name>
  <uuid>a38bdd43-7fba-4e23-98f1-8c0ab83cff2c</uuid>
  <forward mode='bridge'/>
  <bridge name='OVS0'/>
  <virtualport type='openvswitch'/>
  <portgroup name='VLAN10'>
    <vlan>
      <tag id='10'/>
    </vlan>
  </portgroup>
  <portgroup name='VLAN20'>
    <vlan>
      <tag id='20'/>
    </vlan>
  </portgroup>
  <portgroup name='VLAN30'>
    <vlan>
      <tag id='30'/>
    </vlan>
  </portgroup>
  <portgroup name='VLAN40'>
    <vlan>
      <tag id='40'/>
    </vlan>
  </portgroup>
  <portgroup name='VLAN99'>
    <vlan>
      <tag id='99'/>
    </vlan>
  </portgroup>
  <portgroup name='VLAN100'>
    <vlan>
      <tag id='100'/>
    </vlan>
  </portgroup>
  <portgroup name='TRUNK'>
    <vlan trunk='yes'>
      <tag id='10'/>
      <tag id='20'/>
      <tag id='30'/>
      <tag id='40'/>
      <tag id='99'/>
      <tag id='100'/>
    </vlan>
  </portgroup>
</network>

Experiments

Let’s check on virt-manager if we are able to see the port groups.

Capturing Packets with Wireshark on the Physical NIC Connected to OVS0

Connect KVM over GRE

Hi Folks,

As you may know, libvirt virtual network switches operate in NAT mode by default (IP masquerading rather than SNAT or DNAT). In this mode, virtual guests can communicate with the outside world, but computers external to the host cannot initiate communications to the guests inside. One solution is creating a virtual switch in routed mode. But we have one more option that does not require changing the underlying virtual switch operation mode: creating a GRE tunnel between the hosts.

What is GRE?

GRE (Generic Routing Encapsulation) is a communication protocol that provides virtually point-to-point communication. It is a very simple and effective method of transporting data over a public network. You can use a GRE tunnel in some of the cases below:

  • Use of multiple protocols over a single-protocol backbone
  • Providing workarounds for networks with limited hops
  • Connection of non-contiguous subnetworks
  • Being less resource demanding than its alternatives (e.g. IPsec VPN)

Reference: https://www.incapsula.com/blog/what-is-gre-tunnel.html

Example of GRE encapsulation

I have created a GRE tunnel to connect to some of my KVM guests from an external host. Figure-2 depicts what my topology looks like.

Figure-2 Connecting KVM guests over GRE Tunnel

I have two physical hosts running the Mint and Ubuntu GNU/Linux distributions. KVM is running on the Ubuntu host.

GRE Tunnel configuration on GNU/Linux hosts

Before creating a GRE tunnel, we need to load the ip_gre module on both GNU/Linux hosts.

mint@mint$ sudo modprobe ip_gre
tesla@otuken:~$ sudo modprobe ip_gre

Configuring the physical interfaces on both nodes.

mint@mint$ sudo ip addr add 100.100.100.1/24 dev enp0s31f6
tesla@otuken:~$ sudo ip addr add 100.100.100.2/24 dev enp2s0

Configuring GRE Tunnel (On the first node)

mint@mint$ sudo ip tunnel add tun0 mode gre remote 100.100.100.2 local 100.100.100.1 ttl 255
mint@mint$ sudo ip link set tun0 up
mint@mint$ sudo ip addr add 10.0.0.10/24 dev tun0
mint@mint$ sudo ip route add 10.0.0.0/24 dev tun0
mint@mint$ sudo ip route add 192.168.122.0/24 dev tun0

Configuring GRE Tunnel (On the Second Node)

tesla@otuken:~$ sudo ip tunnel add tun0 mode gre remote 100.100.100.1 local 100.100.100.2 ttl 255
tesla@otuken:~$ sudo ip link set tun0 up
tesla@otuken:~$ sudo ip addr add 10.0.0.20/24 dev tun0
tesla@otuken:~$ sudo ip route add 10.0.0.0/24 dev tun0

As the GRE protocol adds an additional 24 bytes of headers, it is highly recommended to lower the MTU on the tunnel interfaces. The commonly recommended value is 1400.
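A quick sketch of where the 24 bytes come from, and how to apply the value (the tun0 name matches the tunnels created above):

```shell
# GRE adds 24 bytes of overhead: a 20-byte outer IPv4 header plus a
# 4-byte GRE header, leaving at most 1476 bytes of payload on a
# standard 1500-byte link; 1400 leaves extra headroom.
gre_overhead=$((20 + 4))
payload_mtu=$((1500 - gre_overhead))
echo "$payload_mtu"

# Apply the recommended value on both tunnel endpoints:
# sudo ip link set tun0 mtu 1400
```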

Also, do not forget to check the iptables rules on both hosts.
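In particular, GRE is its own IP protocol (number 47), so a firewall that only opens TCP/UDP ports will silently drop the tunnel. A rule along these lines (a sketch; adapt it to your own ruleset) allows it on both hosts:

```shell
# Accept GRE (IP protocol 47) traffic between the tunnel endpoints.
sudo iptables -I INPUT -p gre -j ACCEPT
```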

Experiment:

Once the configuration was complete, I could successfully ping the KVM guest (192.168.122.35) and transfer a file over SSH (scp). You can download the Wireshark pcap file here.

Windows Server 2012 R2 hangs on the KVM hypervisor

Windows Server 2012 R2 hangs during installation on a KVM hypervisor running on Ubuntu.

Version of the Virtual Host:

gokay@ankara:~$ lsb_release -a
No LSB modules are available.
Distributor ID:	Ubuntu
Description:	Ubuntu 16.04.3 LTS
Release:	16.04
Codename:	xenial

Solution:

      Check your virtual disk bus and change it to SATA.

I finally installed Windows Server 2012 R2 successfully, but the virtual NIC was not detected.

Solution:

    Check your virtual network interface device model. It should be rtl8139; virtio did not work for me.

VLAN Creation on KVM-I

Creating a VLAN on KVM requires more raw networking knowledge than in the VMware world: it takes some Linux networking knowledge besides a general understanding of computer networks. I did not find much information on the Internet about this, so I am writing this post to fill the gap. 🙂 There is actually more than one method for creating a VLAN on KVM; in this post I will show the first one. In this method, sub-interfaces are created on the bridge, NOT on the physical NIC interface. This way, VLAN tags are stripped off or inserted on the bridge rather than on the physical interface. I use Ubuntu 16.04 for the KVM host. In this post, I will use KVM and libvirt interchangeably.

root@ankara:~# lsb_release -a
No LSB modules are available.
Distributor ID:	Ubuntu
Description:	Ubuntu 16.04.3 LTS
Release:	16.04
Codename:	xenial

Loading 8021q module

First, we need to load the 8021q module on the KVM host in order to encapsulate and decapsulate IEEE 802.1Q VLAN tags.

root@ankara:~# modprobe 8021q

To load the module automatically on boot, create a file 8021q.conf in /etc/modules-load.d/ and add the line 8021q:

root@ankara:~# cat /etc/modules-load.d/8021q.conf 
8021q

Creating a Bridge(s)

In order to create VLANs (trunk and access), we need to create bridge(s) and tell the system to tag the frames. In Linux, we can use the vconfig command to create tagged sub-interfaces. We need to install the vlan package to use it.

root@ankara:~# apt-get install vlan

Note: Creating a bridge with this method is NOT persistent. To make it persistent, you need to add the configuration to the /etc/network/interfaces file. Because there are plenty of tutorials about that, I do not explain it here.

root@ankara:~# brctl addbr br0
root@ankara:~# vconfig add br0 30 #subinterface(vlan30)
root@ankara:~# vconfig add br0 40 #subinterface(vlan40)
root@ankara:~# brctl addbr vlan30
root@ankara:~# brctl addbr vlan40
root@ankara:~# brctl addif vlan30 br0.30 #vlan30
root@ankara:~# brctl addif vlan40 br0.40 #vlan40
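As an aside, vconfig and brctl are deprecated on modern distributions; a sketch of the equivalent setup with iproute2 alone would be:

```shell
# iproute2 equivalents of the vconfig/brctl commands above.
ip link add link br0 name br0.30 type vlan id 30   # sub-interface for VLAN 30
ip link add link br0 name br0.40 type vlan id 40   # sub-interface for VLAN 40
ip link add name vlan30 type bridge                # per-VLAN bridges
ip link add name vlan40 type bridge
ip link set br0.30 master vlan30                   # enslave sub-interfaces
ip link set br0.40 master vlan40
```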

You can see network interfaces after creating bridges and sub-interfaces on KVM Host.

root@ankara:~# ip link show 
...(omitted some output)
11: br0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
12: br0.30@br0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue master vlan30 state LOWERLAYERDOWN mode DEFAULT group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
13: br0.40@br0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue master vlan40 state LOWERLAYERDOWN mode DEFAULT group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
14: vlan30: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
15: vlan40: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff

We also need to bring the links up.

root@ankara:/etc/libvirt/qemu/networks# ip link set br0 up
root@ankara:/etc/libvirt/qemu/networks# ip link set br0.30 up
root@ankara:/etc/libvirt/qemu/networks# ip link set br0.40 up
root@ankara:/etc/libvirt/qemu/networks# ip link set vlan30 up
root@ankara:/etc/libvirt/qemu/networks# ip link set vlan40 up

Final bridge status

root@ankara:~# brctl show
bridge name	bridge id		STP enabled	interfaces
br0		8000.000000000000	no			
vlan30		8000.000000000000	no		br0.30
vlan40		8000.000000000000	no		br0.40

Defining Bridges on KVM.

After creating the bridges, we also need to define them on our hypervisor in order to use them. I will create three configuration files, for br0, vlan30 and vlan40, in the /etc/libvirt/qemu/networks folder.

br0.xml

<network>
  <name>br0</name>
  <forward mode='bridge'/>
  <bridge name='br0'/>
</network>

vlan30.xml

<network>
  <name>vlan30</name>
  <forward mode='bridge'/>
  <bridge name='vlan30'/>
</network>

vlan40.xml

<network>
  <name>vlan40</name>
  <forward mode='bridge'/>
  <bridge name='vlan40'/>
</network>


root@ankara:/etc/libvirt/qemu/networks# virsh net-define br0.xml
root@ankara:/etc/libvirt/qemu/networks# virsh net-define vlan30.xml
root@ankara:/etc/libvirt/qemu/networks# virsh net-define vlan40.xml
root@ankara:/etc/libvirt/qemu/networks# virsh net-start br0
root@ankara:/etc/libvirt/qemu/networks# virsh net-start vlan30
root@ankara:/etc/libvirt/qemu/networks# virsh net-start vlan40
#to auto start on boot.
root@ankara:/etc/libvirt/qemu/networks# virsh net-autostart br0
root@ankara:/etc/libvirt/qemu/networks# virsh net-autostart vlan30
root@ankara:/etc/libvirt/qemu/networks# virsh net-autostart vlan40

Checking bridges

root@ankara:/etc/libvirt/qemu/networks# virsh  net-list
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 br0                  active     yes           yes
 vlan30               active     yes           yes
 vlan40               active     yes           yes
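With the networks defined, a guest NIC can be attached to one of them. Assuming a hypothetical running guest named guest1, something like this should work:

```shell
# Attach a virtio NIC connected to the vlan30 libvirt network,
# both immediately (--live) and persistently (--config).
virsh attach-interface --domain guest1 --type network --source vlan30 \
      --model virtio --config --live
```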

So far so good?

You may be confused, as we have done many things so far. I hope the figure below gives you a better understanding of what we did; it depicts what our network looks like. The only thing I did not do is add a physical interface to the bridge br0; in this post, the KVM guests will not connect to the Internet. With this design we do not need any VLAN configuration on the KVM virtual guests; it is all handled by br0.30 and br0.40. Any outgoing packet from the VLAN30 network will be tagged by the br0.30 sub-interface, and any incoming tagged packet to the VLAN30 network will have its tag stripped off by the br0.30 sub-interface. The same applies to the VLAN40 network.


Experiments

I captured the packets on the br0.30 interface and the br0 bridge to check whether the VLANs work as expected.

Output on br0.30 (the incoming tagged ICMP request has had its tag stripped off by br0.30, so we see untagged frames):

root@ankara:/etc/libvirt/qemu/networks# tcpdump -i br0.30 -e
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on br0.30, link-type EN10MB (Ethernet), capture size 262144 bytes
15:06:46.125189 70:54:d2:99:56:c0 (oui Unknown) > 52:54:00:43:40:b7 (oui Unknown), ethertype IPv4 (0x0800), length 98: 172.16.30.50 > 172.16.30.10: ICMP echo request, id 2453, seq 520, length 64
15:06:46.125429 52:54:00:43:40:b7 (oui Unknown) > 70:54:d2:99:56:c0 (oui Unknown), ethertype IPv4 (0x0800), length 98: 172.16.30.10 > 172.16.30.50: ICMP echo reply, id 2453, seq 520, length 64
15:06:47.149216 70:54:d2:99:56:c0 (oui Unknown) > 52:54:00:43:40:b7 (oui Unknown), ethertype IPv4 (0x0800), length 98: 172.16.30.50 > 172.16.30.10: ICMP echo request, id 2453, seq 521, length 64
15:06:47.149530 52:54:00:43:40:b7 (oui Unknown) > 70:54:d2:99:56:c0 (oui Unknown), ethertype IPv4 (0x0800), length 98: 172.16.30.10 > 172.16.30.50: ICMP echo reply, id 2453, seq 521, length 64


Output on br0 (we see tagged 802.1Q encapsulation, VLAN 30):

root@ankara:/etc/libvirt/qemu/networks# tcpdump -i br0 -e
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on br0, link-type EN10MB (Ethernet), capture size 262144 bytes
15:06:58.413319 70:54:d2:99:56:c0 (oui Unknown) > 52:54:00:43:40:b7 (oui Unknown), ethertype 802.1Q (0x8100), length 102: vlan 30, p 0, ethertype IPv4, 172.16.30.50 > 172.16.30.10: ICMP echo request, id 2453, seq 532, length 64
15:06:58.413564 52:54:00:43:40:b7 (oui Unknown) > 70:54:d2:99:56:c0 (oui Unknown), ethertype 802.1Q (0x8100), length 102: vlan 30....


According to the figure above, hosts on VLAN30 and hosts on VLAN40 cannot communicate with each other, as we do not have an L3 device for inter-VLAN routing. In the next post, I will provision a virtual L3 device, VyOS (Vyatta), on KVM. I will add two network interfaces to it: one connected to br0 (trunk port) and the other to the physical NIC for the Internet connection.

Adding Shared disk on KVM

Hello,

In this very short post, I will share with you how to create a shared disk on a KVM host the ugliest way :). It is a prerequisite for the next post, in which I will share with you how to build a two-node cluster on Red Hat. I will use the latest version of CentOS 6 for the OS.
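The screenshots that originally illustrated this are gone, so here is a hedged command-line sketch of the same idea (the pool path /data/rhcevms and the guest names node1/node2 are assumptions):

```shell
# Create a raw disk image to be shared by both cluster nodes.
qemu-img create -f raw /data/rhcevms/shared.img 1G

# Attach it to both guests in shareable mode so the disk can be
# opened by more than one domain at a time.
virsh attach-disk node1 /data/rhcevms/shared.img vdb --mode shareable --persistent
virsh attach-disk node2 /data/rhcevms/shared.img vdb --mode shareable --persistent
```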


Mounting QCOW2 Disk

QCOW2 is a disk image format used by the QEMU virtualization software.

Install Software:

root@ankara:~# apt-get install libguestfs-tools

Create a directory for the mount point.

root@ankara:/data/rhcevms# mkdir /mnt/qcow2

Mounting qcow2 disk

If you do not know the partition name on the disk, you can pass a bogus one; in this case I used "asssd", and guestmount then showed me the available partitions on the disk. I mounted the qcow2 disk in read-only mode. Do not mount the disk image of a running virtual system, as you will most probably destroy it.

root@ankara:/data/rhcevms# guestmount -a centos7.qcow2 -m /dev/asssd --ro /mnt/qcow2/
libguestfs: error: mount_options: mount_options_stub: /dev/asssd: No such file or directory
guestmount: '/dev/asssd' could not be mounted.
guestmount: Did you mean to mount one of these filesystems?
guestmount: 	/dev/sda1 (xfs)
guestmount: 	/dev/cl/root (xfs)
guestmount: 	/dev/cl/swap (swap)

I chose /dev/cl/root (an LVM volume).

root@ankara:/data/rhcevms# guestmount -a centos7.qcow2 -m /dev/cl/root --ro /mnt/qcow2/
root@ankara:/data/rhcevms# cd /mnt/qcow2/
root@ankara:/mnt/qcow2# ls
bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
root@ankara:/mnt/qcow2# cat etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
root@ankara:/mnt/qcow2# cat etc/resolv.conf 
# Generated by NetworkManager
search sfp.local
root@ankara:/mnt/qcow2# cat etc/hostname 
centos7.sfp.local
root@ankara:/mnt/qcow2#

Unmounting the disk

root@ankara:/data/rhcevms# cd
root@ankara:~# umount /mnt/qcow2
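Newer libguestfs versions also ship a dedicated helper for this, which waits for the FUSE mount to flush before returning:

```shell
# Equivalent unmount using the libguestfs helper.
guestunmount /mnt/qcow2
```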