Hourly Log Rotation

Sometimes you need to rotate your logs hourly instead of daily or weekly: in a big virtual environment many things need to be logged, and the daily logs can grow so large that hourly rotation becomes necessary. For this you need to customize some settings on your central syslog server. Below are sample steps to create a logrotate configuration that rotates the logs hourly.

Steps:

1 – Copy /etc/cron.daily/logrotate to /etc/cron.hourly and set it as executable.

# cp /etc/cron.daily/logrotate /etc/cron.hourly
# chmod u+x /etc/cron.hourly/logrotate

2 – Create a directory named logrotate.hourly.conf in /etc.

# mkdir -p /etc/logrotate.hourly.conf

3 – Modify the file logrotate in /etc/cron.hourly based on your needs. See the sample below.

#!/bin/sh
/usr/sbin/logrotate /etc/logrotate.hourly.conf/example
EXITVALUE=$?
if [ $EXITVALUE != 0 ]; then
    /usr/bin/logger -t logrotate "ALERT exited abnormally with [$EXITVALUE]"
fi
exit $EXITVALUE

4 – Create your logrotate configuration in the directory /etc/logrotate.hourly.conf.

For this post, we named it ‘example’. (This is the path we referenced in /etc/cron.hourly/logrotate above.)

/opt/logs/[2-9][0-9][0-9][0-9]/[0-9][0-9]/[0-9][0-9]/*.log {
    notifempty
    compress
    maxage 60
    rotate 200
    create 0600 root root
    size 4G
    postrotate
        /usr/bin/systemctl reload syslog-ng > /dev/null
    endscript
}

  • You may need to tune the rotate, size and maxage options based on your needs (you can test with the dry run shown below).
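
To verify the new configuration before the hourly cron job picks it up, you can run logrotate by hand in debug mode; this is only a dry run, nothing is actually rotated:

# /usr/sbin/logrotate -d /etc/logrotate.hourly.conf/example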

Creating an Image for MaaS with Packer

In this post we are going to build an image with Packer that will be deployed via MaaS. After the image is built and uploaded to MaaS, it can be used to provision virtual machines or deploy the OS on bare-metal machines. In order to build an image that is deployable with MaaS, we need a couple of files, which you can clone here.

For this post, a minimal CentOS 7 image including the httpd package will be created for testing. One of the cool things about Packer is that you can also run your Ansible playbook inside the machine being provisioned by Packer. The Ansible (remote) provisioner is used to configure the NTP server and install httpd.

As QEMU is used as the builder, qemu-system-x86_64 has to be installed on the host where Packer runs. Packer will create a qcow2 image after a successful build, but we are going to use a tar.gz image file since we deploy the image via MaaS.
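
As a rough sketch, on a Debian/Ubuntu build host the QEMU pieces used by the builder and the post-processor can be installed like this (package names are an assumption and may differ on your distribution):

$ sudo apt-get install qemu-system-x86 qemu-utils
$ qemu-system-x86_64 --version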

centos7.json (do not forget to change iso_url in accordance with your environment.)

{
    "builders": [
        {
            "type": "qemu",
            "iso_url": "/home/tesla/packer/centos7/isos/CentOS-7-x86_64-NetInstall-2003.iso",
            "iso_checksum_type": "sha256",
            "iso_checksum": "101bc813d2af9ccf534d112cbe8670e6d900425b297d1a4d2529c5ad5f226372",
            "boot_command": [
                "<tab> ",
                "inst.ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/centos7.ks ",
                "<enter>"
            ],
            "ssh_username": "tesla",
            "ssh_password": "tesla",
            "ssh_wait_timeout": "12000s",
            "boot_wait": "3s",
            "disk_size": "4G",
            "display": "none",
            "headless": false,
            "memory": 4096,
            "accelerator": "kvm",
            "cpus": 4,
            "http_directory": "http",
            "shutdown_timeout": "20m",
            "disk_interface": "virtio",
            "format": "qcow2",
            "net_device": "virtio-net"
        }
    ],

"post-processors": [
        {
            "type": "shell-local",
            "inline_shebang": "/bin/bash -e",
            "inline": [
                "TMP_DIR=$(mktemp -d /tmp/packer-maas-XXXX)",
                "echo 'Mounting image...'",
                "modprobe nbd",
                "qemu-nbd -d /dev/nbd4",
                "qemu-nbd -c /dev/nbd4 -n output-qemu/packer-qemu",
                "echo 'Waiting for partitions to be created...'",
                "tries=0",
                "while [ ! -e /dev/nbd4p1 -a $tries -lt 60 ]; do",
                "    sleep 1",
                "    tries=$((tries+1))",
                "done",
                "echo 'Tarring up image...'",
                "mount /dev/nbd4p1 $TMP_DIR",
                "tar -Sczpf centos7.tar.gz --selinux -C $TMP_DIR .",
                "echo 'Unmounting image...'",
                "umount $TMP_DIR",
                "qemu-nbd -d /dev/nbd4",
                "rmdir $TMP_DIR"
            ]
        }
    ],
    "provisioners": [
    {
  "type": "shell",
  "pause_before": "5s",
  "inline": [
	"sudo yum -y install epel-release",
	"sudo yum -y update",
	"sudo yum -y remove cloud-init",
	"sudo yum -y install python-jsonschema python-devel",
	"sudo yum -y install cloud-init --disablerepo=* --enablerepo=group_cloud-init-el-stable",
	"sudo yum -y install qemu-guest-agent wget"
  ]
},
    {
      "user": "tesla",
      "type": "ansible",
      "playbook_file": "./ansible/main.yml"
    },
    {
     "type": "shell",
     "inline": [
     
	"sudo systemctl enable cloud-init",
	"sudo rm -rf /var/lib/cloud/",
	"sudo rm -rf /etc/cloud/cloud-init.disabled",
	"sudo /usr/bin/truncate -s 0 /etc/fstab",
	"sudo /usr/bin/truncate -s 0 /etc/resolv.conf",
        "sudo rm -f /etc/sysconfig/network-scripts/ifcfg-[^lo]*",
        "sudo sync"
  ]
}
    ]
}

main.yml

---
- hosts: default
  become: yes
  roles:
    - configure_httpd
    - configure_chrony

centos7.ks

url --mirrorlist="http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=os"
text
reboot
firewall --enabled --service=ssh,http
firstboot --disable
ignoredisk --only-use=vda
lang en_US.UTF-8
keyboard us
network --bootproto=dhcp --hostname=packer-maas.manintheit.org
selinux --enforcing
timezone UTC --isUtc
bootloader --location=mbr --driveorder="vda" --timeout=1
rootpw --plaintext root1234
user --name=tesla --groups=wheel --plaintext --password=tesla


repo --name="Base" --mirrorlist="http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=os"
repo --name="Updates" --mirrorlist="http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=updates"
repo --name="Extras" --mirrorlist="http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=extras"
repo --name="cloud-init" --baseurl="http://copr-be.cloud.fedoraproject.org/results/@cloud-init/el-stable/epel-7-x86_64"

zerombr
clearpart --all --initlabel
part / --size=1 --grow --asprimary --fstype=ext4


%packages
@core
#httpd
bash-completion
cloud-init-el-release
cloud-init
# cloud-init only requires python-oauthlib with MAAS. As such upstream
# has removed python-oauthlib from cloud-init's deps.
python2-oauthlib
cloud-utils-growpart
rsync
tar
yum-utils
# bridge-utils is required by cloud-init to configure networking. Without it
# installed cloud-init will try to install it itself which will not work in
# isolated environments.
bridge-utils
# Tools needed to allow custom storage to be deployed without accessing the
# Internet.
grub2-efi-x64
shim-x64
# Older versions of Curtin do not support secure boot and setup grub by
# generating grubx64.efi with grub2-efi-x64-modules.
grub2-efi-x64-modules
efibootmgr
dosfstools
lvm2
mdadm
device-mapper-multipath
iscsi-initiator-utils
-plymouth
# Remove ALSA firmware
-a*-firmware
# Remove Intel wireless firmware
-i*-firmware
%end


%post --erroronfail
systemctl disable cloud-init
touch /etc/cloud/cloud-init.disabled
yum install -y sudo
echo "tesla        ALL=(ALL)       NOPASSWD: ALL" >> /etc/sudoers.d/tesla
sed -i "s/^.*requiretty/#Defaults requiretty/" /etc/sudoers
yum clean all
%end

Validating Packer config.

sudo PACKER_LOG=1 packer validate centos7.json

After successful validation, we can start building the machine image.

sudo PACKER_LOG=1 packer build centos7.json

Once Packer finishes successfully, a centos7.tar.gz file should be created, which we will upload to MaaS to provision VMs or install the OS on bare-metal servers.

Uploading image to MaaS.

maas $PROFILE boot-resources create name="centos/centos7Packer1" architecture=amd64/generic content@=centos7.tar.gz
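
To confirm that the image is available in MaaS, you can list the boot resources afterwards; for example:

maas $PROFILE boot-resources read | grep centos7Packer1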

Important Takeaways on Packer.

Do not forget to issue the sync command inside the machine provisioned by Packer. Otherwise, you may end up with empty systemd unit files in the image; the sync command flushes data from the cache to disk.

MaaS uses cloud-init to set up various host settings such as disk partitioning, DNS servers, network configuration, etc. Some of the files are therefore removed or truncated prior to image creation, because MaaS will populate these configurations during the first boot of the system. If you deploy the OS via MaaS and configure the DNS settings in MaaS, they will be overwritten by cloud-init anyway, even if you configure resolv.conf inside the image.

Provisioning a VM on KVM via Kickstart using virt-install

virt-install is a command line tool for creating new KVM, Xen, or Linux container guests using the libvirt hypervisor management library. It is one of the quickest ways to deploy a VM from the command line. In this post I will also show you how to install CentOS on KVM via kickstart. In this installation, instead of choosing the native GNU/Linux bridge, we are using Open vSwitch.

I am assuming that you have already configured your DHCP, DNS and HTTP server environment for PXE boot. I am using Cobbler for DHCP server management, and I am provisioning CentOS machines to be used as Kubernetes cluster nodes. As I use a remote KVM host, the user tesla has to be able to connect with SSH key authentication and has to be in the libvirt group; see the sketch below.
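
A minimal sketch of that preparation, assuming the KVM host is 192.168.122.1 as used in the script below (on some distributions the group is called libvirtd instead of libvirt):

# On the workstation running virt-install: copy tesla's public key to the KVM host
$ ssh-copy-id tesla@192.168.122.1
# On the KVM host: add tesla to the libvirt group
$ sudo usermod -aG libvirt tesla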

Provisioning script.

virt-install \
--connect qemu+ssh://tesla@192.168.122.1/system \
--name k8s-master \
--ram 2048 \
--disk bus=virtio,pool=KVMGuests,size=15,format=qcow2 \
--network network=OVS0,model=virtio,virtualport_type=openvswitch,portgroup=VLAN100 \
--vcpus 2 \
--os-type linux \
--location http://cobbler.manintheit.org/cblr/links/CentOS7-x86_64 \
--os-variant rhel7 \
--extra-args="ks=http://10.5.100.253/k8s/k8s-master-centos7.ks ksdevice=eth0 ip=10.5.100.15 netmask=255.255.255.0 dns=10.5.100.253 gateway=10.5.100.254" 
--location Distribution tree installation source. virt-install can recognize certain distribution trees and fetches a bootable kernel/initrd pair to launch the install.

k8s-master-centos7.ks

install
text
eula --agreed
url --url=http://10.5.100.253/cblr/links/CentOS7-x86_64/
lang en_US.UTF-8
keyboard us
network --onboot=on --bootproto=static  --ip 10.5.100.15 --netmask 255.255.255.0 --gateway 10.5.100.254 --nameserver 10.5.100.253 --device=eth0 --hostname k8s-master.manintheit.org
rootpw root
firewall --disabled
selinux --permissive
timezone Europe/Berlin
skipx
zerombr
clearpart --all --initlabel
part /boot --fstype ext4 --size=512
part /     --fstype ext4 --size=1 --grow
authconfig --enableshadow --passalgo=sha512
services --enabled=NetworkManager,sshd
reboot
user --name=tesla --plaintext --password tesla --groups=tesla,wheel

#repo --name=base --baseurl=http://mirror.centos.org/centos/7.3.1611/os/x86_64/
#repo --name=epel-release --baseurl=http://anorien.csc.warwick.ac.uk/mirrors/epel/7/x86_64/
#repo --name=elrepo-kernel --baseurl=http://elrepo.org/linux/kernel/el7/x86_64/
#repo --name=elrepo-release --baseurl=http://elrepo.org/linux/elrepo/el7/x86_64/
#repo --name=elrepo-extras --baseurl=http://elrepo.org/linux/extras/el7/x86_64/

%packages --ignoremissing --excludedocs
@Base
%end

%post
yum update -y
yum install -y sudo
echo "tesla        ALL=(ALL)       NOPASSWD: ALL" >> /etc/sudoers.d/tesla
sed -i "s/^.*requiretty/#Defaults requiretty/" /etc/sudoers
/bin/echo 'UseDNS no' >> /etc/ssh/sshd_config
yum clean all
/bin/sh -c 'echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf'
modprobe br_netfilter
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

#Enable kubernetes repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
%end
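
Since the kickstart runs unattended in text mode, it can be handy to attach to the guest console and watch the installation; assuming the same connection URI and VM name as above:

$ virsh --connect qemu+ssh://tesla@192.168.122.1/system console k8s-master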

Creating VLANs on KVM with OpenVswitch

VLAN is a crucial L2 network technology for segmenting broadcast domains; in the end it gives you better network utilization and security. If you are familiar with VMware technology, you can create a port group on a dVS or standard switch. But if you need to segregate your network on a KVM hypervisor, you need some other packages. In this tutorial I will show you how to create VLANs using Open vSwitch and integrate it with KVM.

For this post, I assume that you already have Open vSwitch installed on your system. If not, follow here. I am also assuming that you have a physical NIC to attach to the virtual bridge (switch) created via Open vSwitch, so that you can connect to the outside world.

tesla@ankara:~$ sudo ovs-vsctl -V
ovs-vsctl (Open vSwitch) 2.12.0
DB Schema 8.0.0

Creating a Virtual Bridge with Openvswitch

$ sudo ovs-vsctl add-br OVS0 

Adding the Physical NIC to the OVS0 Bridge

$ sudo ovs-vsctl add-port OVS0 enp0s31f6
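
You can verify the bridge and its uplink port with ovs-vsctl; the output will vary depending on your setup:

$ sudo ovs-vsctl show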

In order to integrate the bridge created by Open vSwitch with KVM, we need to create an XML configuration file and define it in KVM. You can see my configuration below.

<network>
 <name>OVS0</name>
 <forward mode='bridge'/>
 <bridge name='OVS0'/>
 <virtualport type='openvswitch'/>
 <portgroup name='VLAN10'>
   <vlan>
     <tag id='10'/>
   </vlan>
 </portgroup>
 <portgroup name='VLAN20'>
   <vlan>
     <tag id='20'/>
   </vlan>
 </portgroup>
 <portgroup name='VLAN30'>
   <vlan>
     <tag id='30'/>
   </vlan>
 </portgroup>
  <portgroup name='VLAN40'>
   <vlan>
     <tag id='40'/>
   </vlan>
 </portgroup>
<portgroup name='VLAN99'>
   <vlan>
     <tag id='99'/>
   </vlan>
 </portgroup>
 <portgroup name='VLAN100'>
   <vlan>
     <tag id='100'/>
   </vlan>
 </portgroup>
<portgroup name='TRUNK'>
   <vlan trunk='yes'>
     <tag id='10'/>
     <tag id='20'/>
     <tag id='30'/>
     <tag id='40'/>
     <tag id='99'/>
     <tag id='100'/>
   </vlan>
 </portgroup>
</network>

As per the XML configuration above, we are creating port groups for VLAN IDs 10, 20, 30, 40, 99 and 100, plus a TRUNK port group that carries all of them.

Defining the configuration with virsh

virsh # net-define --file OVS0.xml 
Network OVS0 defined from OVS0.xml
virsh # net-autostart --network OVS0
Network OVS0 marked as autostarted
virsh # net-list 
 Name      State    Autostart   Persistent
--------------------------------------------
 default   active   yes         yes
 OVS0      active   yes         yes
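
net-define only makes the network persistent; if it does not show up as active yet, you can start it manually, for example:

virsh # net-start --network OVS0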

After defining it, you will see that your XML file has been modified by libvirt and given a uuid.

<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
  virsh net-edit OVS0
or other application using the libvirt API.
-->

<network>
  <name>OVS0</name>
  <uuid>a38bdd43-7fba-4e23-98f1-8c0ab83cff2c</uuid>
  <forward mode='bridge'/>
  <bridge name='OVS0'/>
  <virtualport type='openvswitch'/>
  <portgroup name='VLAN10'>
    <vlan>
      <tag id='10'/>
    </vlan>
  </portgroup>
  <portgroup name='VLAN20'>
    <vlan>
      <tag id='20'/>
    </vlan>
  </portgroup>
  <portgroup name='VLAN30'>
    <vlan>
      <tag id='30'/>
    </vlan>
  </portgroup>
  <portgroup name='VLAN40'>
    <vlan>
      <tag id='40'/>
    </vlan>
  </portgroup>
  <portgroup name='VLAN99'>
    <vlan>
      <tag id='99'/>
    </vlan>
  </portgroup>
  <portgroup name='VLAN100'>
    <vlan>
      <tag id='100'/>
    </vlan>
  </portgroup>
  <portgroup name='TRUNK'>
    <vlan trunk='yes'>
      <tag id='10'/>
      <tag id='20'/>
      <tag id='30'/>
      <tag id='40'/>
      <tag id='99'/>
      <tag id='100'/>
    </vlan>
  </portgroup>
</network>

Experiments

Let’s check in virt-manager whether we are able to see the port groups.

Capturing packets with Wireshark on the physical NIC connected to OVS0
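
For reference, a guest NIC attached to one of these port groups ends up with an interface definition along these lines in its domain XML (VLAN100 is just an example here):

<interface type='network'>
  <source network='OVS0' portgroup='VLAN100'/>
  <model type='virtio'/>
</interface>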

Compiling Archer T600U Plus WiFi USB Adapter on GNU/Linux with dkms

In this very short post, I am going to show you how to compile the TP-Link USB adapter module on GNU/Linux with dkms. I am using Pop!_OS 19.10.

$ sudo apt install git dkms
$ git clone https://github.com/aircrack-ng/rtl8812au.git
$ cd rtl8812au
$ sudo ./dkms-install.sh

If everything goes well you should get an output similar to below.

About to run dkms install steps...

Creating symlink /var/lib/dkms/rtl8812au/5.6.4.2/source ->
                 /usr/src/rtl8812au-5.6.4.2

DKMS: add completed.

Kernel preparation unnecessary for this kernel.  Skipping...

Building module:
cleaning build area...
'make' -j8 KVER=5.3.0-20-generic KSRC=/lib/modules/5.3.0-20-generic/build........
cleaning build area...

DKMS: build completed.

88XXau.ko:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/5.3.0-20-generic/updates/dkms/

depmod....

DKMS: install completed.
Finished running dkms install steps.
$ ip link show
...
3: wlx34e894b147cc: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 2312 qdisc mq state UP mode DORMANT group default qlen 1000
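
You can also confirm that the driver is loaded; the module name matches the 88XXau.ko built above:

$ lsmod | grep 88XXau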

Connect KVM over GRE

Hi Folks,

As you may know, libvirt virtual network switches operate in NAT mode by default (IP masquerading rather than SNAT or DNAT). In this mode virtual guests can communicate with the outside world, but computers external to the host cannot initiate connections to the guests inside. One solution is to create a virtual switch in routed mode, but there is another option that does not change the underlying virtual switch operation mode: creating a GRE tunnel between the hosts.

What is GRE?

GRE (Generic Routing Encapsulation) is a communication protocol that provides a virtual point-to-point connection. It is a very simple and effective method of transporting data over a public network. You can use a GRE tunnel in cases such as the following:

  • Use of multiple protocols over a single-protocol backbone
  • Providing workarounds for networks with limited hops
  • Connection of non-contiguous subnetworks
  • Being less resource demanding than its alternatives (e.g. IPsec VPN)

Reference: https://www.incapsula.com/blog/what-is-gre-tunnel.html

Figure-1 Example of GRE encapsulation (Reference: https://www.incapsula.com/blog/what-is-gre-tunnel.html)

I have created a GRE tunnel to connect to some of the KVM guests from an external host. Figure-2 depicts what my topology looks like.

Figure-2 Connecting KVM guests over GRE Tunnel

I have two physical hosts running the Mint and Ubuntu GNU/Linux distributions. KVM is running on the Ubuntu host.

GRE Tunnel configuration on GNU/Linux hosts

Before creating a GRE tunnel, we need to load the ip_gre module on both GNU/Linux hosts.

mint@mint$ sudo modprobe ip_gre
tesla@otuken:~$ sudo modprobe ip_gre

Configuring the physical interfaces on both nodes.

mint@mint$ sudo ip addr add 100.100.100.1/24 dev enp0s31f6
tesla@otuken:~$ sudo ip addr add 100.100.100.2/24 dev enp2s0

Configuring GRE Tunnel (On the first node)

mint@mint$ sudo ip tunnel add tun0 mode gre remote 100.100.100.2 local 100.100.100.1 ttl 255
mint@mint$ sudo ip link set tun0 up
mint@mint$ sudo ip addr add 10.0.0.10/24 dev tun0
mint@mint$ sudo ip route add 10.0.0.0/24 dev tun0
mint@mint$ sudo ip route add 192.168.122.0/24 dev tun0

Configuring GRE Tunnel (On the Second Node)

tesla@otuken:~$ sudo ip tunnel add tun0 mode gre remote 100.100.100.1 local 100.100.100.2 ttl 255
tesla@otuken:~$ sudo ip link set tun0 up
tesla@otuken:~$ sudo ip addr add 10.0.0.20/24 dev tun0
tesla@otuken:~$ sudo ip route add 10.0.0.0/24 dev tun0

As the GRE protocol adds an additional 24 bytes of header, it is highly recommended to lower the MTU on the tunnel interfaces. The recommended MTU value is 1400; see the example below.
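
For example, setting it on both tunnel endpoints (tun0 in this setup):

mint@mint$ sudo ip link set tun0 mtu 1400
tesla@otuken:~$ sudo ip link set tun0 mtu 1400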

Also do not forget to check iptables rules on both hosts.

Experiment:

Once the configuration was completed, I successfully pinged the KVM guest (192.168.122.35) and transferred a file over SSH (scp). You can download the Wireshark pcap file here.

DRBD(without clustering)

Do you need transparent, real-time replication of block devices, without the need for specialty hardware and without paying anything?

If your answer is YES, DRBD is your solution. DRBD is a distributed replicated storage system for the Linux platform. It is implemented as a kernel driver, several userspace management applications, and some shell scripts. DRBD is traditionally used in high availability (HA) clusters.

In this post, I am going to create HA cluster block storage. Switching over will be handled manually, but in the next post I will add cluster software. I have two Debian systems for this lab. Figure-1 depicts the sample architecture.

Figure-1 Sample HA Block Storage

Reference:https://www.ibm.com/developerworks/jp/linux/library/l-drbd/index.html

Installing DRBD packages:

Install drbd8-utils on each node.

root@debian1:~# apt-get install drbd8-utils 

Add the hostnames to the /etc/hosts file on each node.

192.168.122.70 debian1
192.168.122.71 debian2

Creating the backing storage:

Instead of adding a disk, we create a file on each node and use it as the backing storage.

root@debian1:~# mkdir /replicated
root@debian1:~# dd if=/dev/zero of=drbd.img bs=1024K count=512
root@debian2:~# mkdir /replicated
root@debian2:~# dd if=/dev/zero of=drbd.img bs=1024K count=512
root@debian1:~# losetup /dev/loop0 /root/drbd.img
root@debian2:~# losetup /dev/loop0 /root/drbd.img

Configuring DRBD:

Add the configuration below on each node.

root@debian1:~# cat /etc/drbd.d/replicated.res
resource replicated {
protocol C;          
on debian1 {
                device /dev/drbd0;
                disk /root/drbd.img;
                address 192.168.122.70:7788;
                meta-disk internal;
                }
on debian2 {
                device /dev/drbd0;
                disk /root/drbd.img;
                address 192.168.122.71:7788;
                meta-disk internal;
                }
               
} 


Initializing the metadata storage (on each node)

root@debian1:~# drbdadm create-md replicated
initializing activity log
NOT initializing bitmap
Writing meta data...
New drbd meta data block successfully created.
root@debian1:~# 

Make sure that drbd service is running on both nodes.

● drbd.service - LSB: Control DRBD resources.
   Loaded: loaded (/etc/init.d/drbd; generated; vendor preset: enabled)
   Active: active (exited) since Fri 2019-02-01 15:32:34 +04; 6min ago
     Docs: man:systemd-sysv-generator(8)
  Process: 1399 ExecStop=/etc/init.d/drbd stop (code=exited, status=0/SUCCESS)
  Process: 1420 ExecStart=/etc/init.d/drbd start (code=exited, status=0/SUCCESS)
    Tasks: 0 (limit: 4915)
   CGroup: /system.slice/drbd.service

root@debian2:~# lsblk 
NAME              MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0               7:0    0  512M  0 loop 
└─drbd0           147:0    0  512M  1 disk 
sr0                11:0    1 1024M  0 rom  
vda               254:0    0   10G  0 disk 
└─vda1            254:1    0   10G  0 part 
  ├─vgroot-lvroot 253:0    0  7.1G  0 lvm  /
  └─vgroot-lvswap 253:1    0  976M  0 lvm  [SWAP]

DRBD uses only one node at a time as the primary node, where reads and writes can be performed. We will first make node 1 the primary node.

root@debian1:~# drbdadm primary replicated --force

root@debian1:~# cat /proc/drbd 
version: 8.4.7 (api:1/proto:86-101)
srcversion: AC50E9301653907249B740E
0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
ns:516616 nr:0 dw:0 dr:516616 al:8 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:7620
[==================>.] sync'ed: 99.3% (7620/524236)K
finish: 0:00:00 speed: 20,968 (13,960) K/sec
root@debian1:~# cat /proc/drbd
version: 8.4.7 (api:1/proto:86-101)
srcversion: AC50E9301653907249B740E
0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
ns:524236 nr:0 dw:0 dr:524236 al:8 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
[===================>] sync'ed:100.0% (0/524236)K
finish: 0:00:00 speed: 20,300 (13,792) K/sec

Initializing the Filesystem:

root@debian1:~# mkfs.ext4 /dev/drbd0 
root@debian1:~# mount /dev/drbd0 /replicated/

Do not forget to create the filesystem on the DRBD device (in this case /dev/drbd0) on the first node only. Do not issue the mkfs.ext4 command on the second node again.

Switching-over the Second node

#On first node:
root@debian1:~# umount /replicated
root@debian1:~# drbdadm secondary replicated
#On second node:
root@debian2:~# drbdadm primary replicated
root@debian2:~# mount /dev/drbd0 /replicated/

Switching-back the First Node:

#On second node:
root@debian2:~# umount /replicated
root@debian2:~# drbdadm secondary replicated
#On first node:
root@debian1:~# drbdadm primary replicated
root@debian1:~# mount /dev/drbd0 /replicated/
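
To double-check which side is currently primary after a switch-over, you can ask drbdadm for the role of the resource (it prints the local and peer roles, e.g. Primary/Secondary):

root@debian1:~# drbdadm role replicated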


Checking Connection without Telnet

Some minimal Linux distributions have no telnet client or similar utilities such as nc or ncat unless you install them. Most of the time we need to troubleshoot whether a server/service is accessible. Do not worry: Bash provides the /dev/tcp and /dev/udp pseudo-devices, so you can still check connectivity without installing the utilities above. Take a look at the examples below and adapt them to your case.

Checking TCP connection:

root@debian2:# echo > /dev/tcp/8.8.8.8/53 && echo "PORT IS OPEN" || echo "PORT IS NOT OPEN"
PORT IS OPEN
root@debian2:~# echo > /dev/tcp/google.com/80 && echo "PORT IS OPEN" || echo "PORT IS NOT OPEN"
PORT IS OPEN
root@debian2:~# echo > /dev/tcp/google.com/443 && echo "PORT IS OPEN" || echo "PORT IS NOT OPEN"
PORT IS OPEN
#This one hangs, so I have to send SIGINT (Ctrl+C).
root@debian2:~# echo > /dev/tcp/google.com/123 && echo "PORT IS OPEN" || echo "PORT IS NOT OPEN"
^C-su: connect: Network is unreachable
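
If you want to avoid that hang, you can wrap the check with timeout from coreutils; a rough example:

root@debian2:~# timeout 3 bash -c 'echo > /dev/tcp/google.com/123' && echo "PORT IS OPEN" || echo "PORT IS NOT OPEN"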


Checking UDP Connection:

root@debian2:~# echo > /dev/udp/0.pool.ntp.org/123 && echo "PORT IS OPEN" || echo "PORT IS NOT OPEN"
PORT IS OPEN

How to Change I/O Scheduler in Linux

The Linux kernel is one of the most complicated pieces of software, and it is used in a wide variety of systems such as laptops, embedded devices, hand-held devices, database servers, supercomputers, etc. All these kinds of devices have different requirements; some applications, for instance, require a fast response to user input.

As you know, the disk is the slowest physical device in the computer world, even though SSD disks are available on the market. The I/O scheduler enables disk access in an optimized way. The Linux kernel has a variety of I/O schedulers that greatly influence I/O performance. There is no single best I/O scheduler; each one delivers the best performance for a particular kind of application.

For example, one study observed that the Apache web server could achieve up to 71% more throughput using the anticipatory I/O scheduler. On the other hand, the anticipatory scheduler has been observed to result in a slowdown on a database run. (http://www.admin-magazine.com/HPC/Articles/Linux-I-O-Schedulers)

Currently, Linux has several I/O schedulers:

  • Completely Fair Queuing (CFQ)
  • Deadline
  • NOOP
  • Anticipatory

I will not go into more detail in this post, but if you really wonder, you can find more in the link.

How to see active I/O Scheduler?

On my system the active I/O scheduler is CFQ. The output shows the current I/O scheduler in square brackets.

tesla@otuken:~$ cat /sys/block/sda/queue/scheduler
noop deadline [cfq]

How To change I/O Scheduler?

To change the scheduler, just echo the name of the desired scheduler:

root@otuken:~# echo noop > /sys/block/sda/queue/scheduler
root@otuken:~# cat /sys/block/sda/queue/scheduler 
[noop] deadline cfq 

The kernel does not switch to the new I/O scheduler immediately; it first completes all requests that were queued under the previous one.

How to change I/O scheduler in Grub and Grub2?

https://access.redhat.com/solutions/32376
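
To make the choice persistent across reboots (details are in the Red Hat article above), the usual approach is to add an elevator= parameter to the kernel command line in GRUB2. A minimal sketch, assuming a Debian/Ubuntu-style system where update-grub is available:

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash elevator=noop"

# then regenerate the GRUB configuration
$ sudo update-grub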

Sharing Internet in Linux

Hi Folks!

Today, I installed Ubuntu 18.04 LTS on my personal laptop, but I could not connect to the Internet as Ubuntu did not recognize my wireless driver. After a couple of rounds of googling, I found my wireless driver [model: Broadcom Limited BCM43142 802.11b/g/n]. But the problem is: how am I going to hook up to the Internet to install the driver?

I realized that I have my company’s laptop, a Lenovo T460, which is one of the best free-DOS laptops. 🙂 I booted it up with an Ubuntu Live CD. Finally, I made the configuration shown in Figure-1.

Figure – 1 Sample Configuration For Sharing Internet.

After the above configuration, everything worked excellently: I am able to hook up to the Internet on my Asus laptop via the Lenovo laptop.

To be honest, before the above configuration I tried to bridge the Ethernet interface with the wireless interface on the Lenovo laptop, but it is not permitted. After some research I found this:

http://kerneltrap.org/mailarchive/linux-ath5k-devel/2010/3/21/6871733

It’s no longer possible to add an interface in the managed mode to a
bridge. You should run hostapd first to pure the interface to the
master mode.

Bridging doesn’t work on the station side anyway because the 802.11
header has three addresses (except when WDS is used) omitting the
address that would be needed for a station to send or receive a packet
on behalf of another system.

Final:

Necessary package to be installed for the Broadcom wireless driver.

tesla@otuken:~$ sudo apt-get update
tesla@otuken:~$ sudo apt-get install bcmwl-kernel-source

After installing the package and rebooting my laptop, it WORKED LIKE A CHARM!