Finding Server IMM IP by IPv6 Multicast Address

Sometimes the IMM has no Layer 3 connectivity, so you have to go to the data center and connect your laptop directly to the server's IMM interface with an Ethernet cable. Now imagine that your organization's infrastructure is so poorly documented that you do not know the IPv4 address of the IMM interface you need to log in to.

If IPv6 is enabled on the server's IMM interface, you are in luck: with the method below you can easily figure out the IPv6 address of the IMM interface and use it to log in.

In this method, by pinging the IPv6 link-local all-nodes multicast address (ff02::1), we get a reply from every node on the link. As the figure above shows, the ping lists only two addresses: your laptop's and the server's.
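On Linux, assuming the laptop's wired interface is named eth0 (adjust to your system), ping the multicast group and then check the IPv6 neighbor cache for the nodes that replied:

# ping -6 ff02::1%eth0
# ip -6 neigh show dev eth0

Once you have found the server's IMM link-local address this way, you can try to log in by typing it into a web browser, fe80::5667:51ff:feb9:4ade%eth0 in this case. If that does not work (many browsers do not accept zone IDs in URLs), use SSH local port forwarding as shown below.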

# ssh -L 9090:[fe80::5667:51ff:feb9:4ade%eth0]:443 127.0.0.1

Then open a web browser and go to https://127.0.0.1:9090. The SSH session to your own machine (127.0.0.1) forwards local port 9090 to port 443 on the IMM's link-local address, which works around browsers that cannot handle zone IDs in URLs.

On Windows, first find the interface index with the command below. In this lab, the Wi-Fi interface (Idx 13) will be used:

netsh int ipv4 show interfaces

Idx     Met         MTU          State                Name
---  ----------  ----------  ------------  ---------------------------
  1          75  4294967295  connected     Loopback Pseudo-Interface 1
 13          55        1500  connected     Wi-Fi
 11          25        1500  connected     Ethernet
 12           5        1500  disconnected  Ethernet 2
 10          35        1500  connected     VMware Network Adapter VMnet1
  4          25        1500  disconnected  Local Area Connection* 4
 17          35        1500  connected     VMware Network Adapter VMnet8


ping -6 ff02::1%13

Note: You may get "Request timed out." when you ping the multicast address on a specified interface on Windows. However, if you capture the traffic with Wireshark, you will still see all the neighbor devices in the multicast group replying.
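For example, with tshark (Wireshark's command-line tool; this assumes it is installed and that "Wi-Fi" is the capture-interface name on your machine), the replies are visible even while ping reports timeouts:

tshark -i "Wi-Fi" -f "icmp6"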

Highly available Load-balancer for Kubernetes Cluster On-Premise – II

In the first post of this series, HAProxy and Keepalived were installed, configured, and tested.

In this post, two stateless web applications will be deployed to Kubernetes, and domain names for both will be registered in DNS to test whether the load balancer works as expected.

Note: For my home-lab, I am using the domain nordic.io.

For the Kubernetes cluster, I assume that the NGINX Ingress controller is deployed as a DaemonSet and listens on ports 80 and 443 on every worker node.
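A quick way to verify this (the namespace and resource names depend on how the controller was installed; ingress-nginx is assumed here):

kubectl get daemonset -n ingress-nginx -o wide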

Deploying Kubernetes Web Applications:

The first application, hello-kubernetes, consists of a ClusterIP Service, a three-replica Deployment, and an Ingress that routes helloworld.nordic.io to the Service:

apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes-svc
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: hello-kubernetes
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
      - name: hello-kubernetes
        image: paulbouwer/hello-kubernetes:1.8
        ports:
        - containerPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-kubernetes-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: helloworld.nordic.io  
    http:
      paths:
        - path: /
          backend:
            serviceName: hello-kubernetes-svc
            servicePort: 80

The second application, whoami, follows the same pattern, with a single replica and an Ingress host of whoami.nordic.io:

apiVersion: v1
kind: Service
metadata:
  name: whoami-svc
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: whoami
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: whoami
  name: whoami
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      run: whoami
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        run: whoami
    spec:
      containers:
      - image: yeasy/simple-web:latest
        name: whoami
      restartPolicy: Always
      schedulerName: default-scheduler
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: whoami-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: whoami.nordic.io  
    http:
      paths:
        - path: /
          backend:
            serviceName: whoami-svc
            servicePort: 80
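Save the two manifests above, for example as hello-kubernetes.yaml and whoami.yaml (the file names are arbitrary), then apply them and confirm that the objects are up:

kubectl apply -f hello-kubernetes.yaml
kubectl apply -f whoami.yaml
kubectl get pods,svc,ingress -n default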

Registering Web Apps to DNS:

Adding the DNS records is a crucial part. To reuse a single load-balancer IP for multiple services, we create one A record for the virtual IP and a CNAME record per application pointing at it. The BIND zone-file entries below do exactly that.

vip1 IN A 10.5.100.50
helloworld IN CNAME vip1
whoami IN CNAME vip1
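These records live in the nordic.io zone file. After editing it, increment the zone's serial number and reload the zone (assuming rndc is configured on the BIND server):

rndc reload nordic.io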

Experiment:

Checking DNS Records:

[tesla@deployment ~]$ nslookup helloworld
Server:		10.5.100.253
Address:	10.5.100.253#53

helloworld.nordic.io	canonical name = vip1.nordic.io.
Name:	vip1.nordic.io
Address: 10.5.100.50

[tesla@deployment ~]$ nslookup whoami
Server:		10.5.100.253
Address:	10.5.100.253#53

whoami.nordic.io	canonical name = vip1.nordic.io.
Name:	vip1.nordic.io
Address: 10.5.100.50

Testing Services:

Hello World App:

Whoami App:
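The same check works from the command line; curl against both hostnames confirms that requests flow through the load-balancer VIP to the right backend:

curl http://helloworld.nordic.io/
curl http://whoami.nordic.io/

The hello-kubernetes response is an HTML page that includes the name of the pod that served the request, so repeating the first command should cycle through the three replicas.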